Your Tech Story

Dialogue Boost

How to use Amazon Prime’s new ‘Dialogue Boost’ feature

One of the most annoying situations when watching TV is being able to hear a scene’s explosions and background music but not its speech. Enter Dialogue Boost.

To spare viewers from being forced to turn on subtitles when they don’t want them, Amazon Prime Video is introducing a new accessibility tool that lets users raise the volume of dialogue relative to background music and effects.

Image Source: guidingtech.com

Raf Soltanovich, vice president of technology at Prime Video and Amazon Studios, said in a statement, “At Prime Video, we are committed to delivering an inclusive, egalitarian, and pleasant streaming experience for all our consumers.

Also Read: What is Google Help Me Write?

“Our library of captioned and audio-described content keeps expanding, and by utilizing our technological capabilities to develop industry-first innovations like Dialogue Boost, we are taking another step towards creating a more accessible streaming experience,” he added.

To use the feature, choose a movie or series that supports it; a title’s detail page indicates whether Dialogue Boost is available. Start the title and open the audio and subtitles drop-down menu on your screen, then select the Dialogue Boost level you want from the available options.

Depending on how much you want to strengthen the dialogue, you can choose between the “English Dialogue Boost: Medium” and “English Dialogue Boost: High” tracks.

Since each level applies a different amount of amplification, as the names suggest, try both to determine which works best for you, then enjoy the title’s louder, more intelligible dialogue.

Also Read: How is the new Google AI search different from the Bard chatbot?

Under the hood, Dialogue Boost analyzes a title’s audio to distinguish dialogue from background sound and identifies moments where speech is being overpowered by other sounds. It then uses AI to selectively raise the dialogue’s volume in those passages.

In this manner, Dialogue Boost does not amplify the dialogue across the entire soundtrack, only the portions that require it. Before a broader rollout, the feature will first be available globally on a select group of Amazon Originals, including “Tom Clancy’s Jack Ryan,” “The Marvelous Mrs. Maisel,” and others.
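
To make the idea concrete, here is a minimal sketch of that selective-boost step in Python, assuming an upstream AI model has already separated the audio into speech and background stems. The function, thresholds, and windowing below are illustrative assumptions, not Amazon’s implementation:

```python
import numpy as np

def boost_dialogue(speech, background, sr, win_s=0.5,
                   min_ratio_db=3.0, boost_db=6.0):
    """Selectively amplify speech only where the background drowns it out.

    speech, background: mono float arrays (pre-separated stems).
    sr: sample rate in Hz.
    min_ratio_db: desired speech-over-background loudness margin.
    boost_db: gain applied to speech in under-powered windows.
    """
    win = int(win_s * sr)
    gain = np.ones_like(speech)
    for start in range(0, len(speech) - win + 1, win):
        seg = slice(start, start + win)
        rms_s = np.sqrt(np.mean(speech[seg] ** 2) + 1e-12)
        rms_b = np.sqrt(np.mean(background[seg] ** 2) + 1e-12)
        ratio_db = 20 * np.log10(rms_s / rms_b)
        if ratio_db < min_ratio_db:            # dialogue is overpowered here
            gain[seg] = 10 ** (boost_db / 20)  # raise only this window
    return speech * gain + background          # remix with boosted speech
```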

It is accessible on all platforms that Prime Video supports. Amazon says it is the first global streaming service to offer such a capability, though some device makers, including Roku, offer a similar “speech clarity” feature.

Google I/O

Google I/O 2023: How to watch and what to expect

Google I/O 2023 will take place on May 10th. Google I/O is an annual developer conference hosted by Google. The conference typically features keynote presentations, technical sessions, and product demonstrations focused on the company’s latest innovations in technology and software.

Image Source: techpp.com

In contrast to the multi-day I/Os of previous years, this year’s event lasts just a single day. Although Google is holding I/O as a live event at Shoreline Amphitheatre near its campus in Mountain View, California, it is also being streamed on the Google I/O webpage, where anyone who registers for free can watch. The event will also be streamed on YouTube.

Also Read: Google Rolls Out Passkeys to (Eventually) Kill Passwords

These days, everyone seems to be thinking about AI-powered chat tools, including Google. At I/O, the company will have the chance to discuss Bard, its in-house chatbot, and how it may benefit other Google services such as Search, Maps, and its productivity tools.

As one might anticipate, beyond discussing how AI chatbots fit into its own product line, Google might also take the opportunity to address concerns about the risks some observers have identified with AI.

Android has played a significant role at previous I/O conferences, and this year is expected to be no different: the company released the Android 14 beta last month, and Google I/O 2023 will be its first chance to give a larger audience a public peek.

Since a finished version of Android 14 is anticipated before the end of summer, we might even get more clarity on the release timeline. Meanwhile, Google’s first foldable device, long rumoured to be in the works, surprised everyone last week when it was made public.

The company shared a picture and a video of the Pixel Fold, a device that folds horizontally like a book. While Google declined to provide any details about its features, an earlier CNBC report indicates the device may have a 5.8-inch display when closed and a 7.6-inch display when opened, making it comparable to Samsung’s Galaxy Z Fold 4.

Also Read: Google Authenticator finally syncs one-time codes in the cloud

The main announcement at Google I/O 2023 is expected to be a new piece of hardware for the company. There have been persistent rumours of a Pixel Fold that would disrupt the foldable phone market, which is now dominated by Samsung products. Google has now essentially acknowledged that this will take place.

Google is also anticipated to release at least one new phone, especially in light of rumours that the Pixel 7a will go on sale around the same time.

Rapid Security Response

What is Apple’s Rapid Security Response?

Apple on Monday released its first “rapid security” updates to the public, in an effort to quickly address security flaws that are being actively exploited or pose a serious risk to its users. The so-called Rapid Security Response updates, according to a notice, “deliver important security improvements between software updates.”

Image Source: 9to5mac.com

Rapid Security Responses exist so that Apple customers can patch their devices more quickly than a regular software update cycle allows.

Apple says the feature is turned on by default and that, with some exceptions, rapid patches can be applied without a reboot.

Also Read: Is advertising the future of streaming?

The rapid security update is now available for users of iOS 16.4.1, iPadOS 16.4.1, and macOS 13.3.1. Once installed, it appends a letter to the software version, such as iOS 16.4.1 (a), iPadOS 16.4.1 (a), and macOS 13.3.1 (a).
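
That suffix breaks naive numeric version comparison, so any tooling that inspects OS versions has to account for it. Here is a small sketch of parsing and ordering the suffixed format; the function and regex are my own illustration, not an Apple API:

```python
import re

def parse_apple_version(version: str):
    """Split a version like '16.4.1 (a)' into numeric parts and an
    optional Rapid Security Response letter."""
    m = re.fullmatch(r"([\d.]+)\s*(?:\((\w)\))?", version.strip())
    if not m:
        raise ValueError(f"unrecognized version string: {version!r}")
    numbers = tuple(int(p) for p in m.group(1).split("."))
    rsr = m.group(2) or ""  # e.g. 'a'; empty string if no RSR applied
    return numbers, rsr

# A suffixed release sorts after the bare release, since '' < 'a'.
assert parse_apple_version("16.4.1") < parse_apple_version("16.4.1 (a)")
```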

Users of earlier versions of Apple’s operating systems won’t receive the rapid security fix; according to Apple, those fixes will instead arrive in later software updates.

Monday’s deployment, however, didn’t go entirely as planned: some customers reported trouble installing the update, and on an iPhone, iPad, and Mac used for testing by TechCrunch, the updates downloaded but did not immediately install.

Researchers have recently found new exploits created by spyware makers QuaDream and NSO Group that target iPhone users worldwide. Both took advantage of previously unknown flaws in Apple software that let their government clients covertly steal data from a victim’s device.

Citizen Lab reported last month that Apple’s Lockdown Mode, a feature introduced last year to thwart similar targeted attacks, had effectively stopped at least one NSO-developed attack that took advantage of a flaw in HomeKit, the company’s smart home feature.

Apple’s rapid security response refers to the company’s approach to quickly identifying and addressing security vulnerabilities in its products. Apple has a dedicated team of security experts who work to proactively identify potential security risks and develop solutions to mitigate them.

Additionally, Apple has implemented various security measures such as two-factor authentication, encryption, and sandboxing to prevent unauthorized access to its products and services. In the event that a security vulnerability is discovered, Apple typically responds quickly by releasing a software update to address the issue.

The company also works closely with security researchers to identify and address vulnerabilities before they can be exploited by malicious actors.

Overall, Apple’s rapid security response is an essential aspect of the company’s commitment to protecting its users’ privacy and security.

Advertising

Is advertising the future of streaming?

The rise of streaming services has revolutionized the way we consume media. With a plethora of streaming options, from Netflix to Hulu to Disney+, there has never been a better time to be a viewer.

However, with the increased competition between streaming services, the need for revenue has become more critical. Advertising has emerged as one of the ways streaming services can generate revenue, but is it the future of streaming?

Image Source: adage.com

Advertising has been a part of television for decades, and with the rise of streaming services, it was only a matter of time before it made its way into the world of online streaming.

The appeal of advertising is clear: it generates revenue for the streaming service, allows for more affordable subscription prices, and can provide targeted advertisements to viewers. This is especially beneficial for smaller streaming services that don’t have the financial power of Netflix or Amazon Prime.

Also Read: Why is Amazon shutting down Halo Division?

The fastest-growing segment of the streaming industry right now is free, ad-supported platforms. Many of them have quietly accumulated large content libraries and millions of users.

And now they’re beginning to make a greater impact as consumers hunt for cheaper ways to access entertainment and companies look for new ways to monetize. The allure of free streaming is right there in the name: it’s cost-free.

A growing number of streaming customers say they already pay more than they would like for their subscriptions, and a Deloitte poll conducted in the fall of last year found that 44% of respondents had canceled at least one subscription service in the previous six months.

In addition, Deloitte found that 59% of customers would be content to watch a few ads per hour in exchange for a less expensive or even free subscription.

Netflix has already found that its ad-supported tier, which costs $6.99 a month and shows a few advertisements per hour, generates more revenue per user than its ad-free subscriptions. Disney Plus also offers an ad-supported option, as do Peacock, the new Max service, and a growing portion of the rest of the sector. It seems that advertisements are the streaming industry’s future.
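
To see why an ad tier can out-earn a pricier ad-free plan, here is a back-of-the-envelope calculation. Apart from the $6.99 price mentioned above, every number is a hypothetical assumption for illustration, not reported data:

```python
# Back-of-the-envelope ARPU comparison for an ad-supported tier.
AD_TIER_PRICE = 6.99    # Netflix's ad tier price (from the article)
AD_FREE_PRICE = 15.49   # assumed ad-free tier price
ADS_PER_HOUR = 4        # assumed "a few advertisements per hour"
HOURS_PER_MONTH = 60    # assumed viewing time per subscriber
CPM = 40.0              # assumed revenue per 1,000 ad impressions ($)

impressions = ADS_PER_HOUR * HOURS_PER_MONTH
ad_revenue = impressions / 1000 * CPM
ad_tier_arpu = AD_TIER_PRICE + ad_revenue

print(f"ad impressions/month: {impressions}")
print(f"ad revenue/user:      ${ad_revenue:.2f}")
print(f"ad tier ARPU:         ${ad_tier_arpu:.2f} vs ad-free ${AD_FREE_PRICE:.2f}")
# With these assumptions the ad tier yields ~$16.59, edging out ad-free.
```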

Also Read: Amazon sees cloud slowdown in April, shares erase gains

However, it may have its drawbacks too. Advertising may not be enough to sustain smaller streaming services in the long run. The streaming market is becoming increasingly saturated, with new services popping up all the time. To stay competitive, streaming services need to offer something unique, and advertising may not be enough to differentiate them from the competition.

Services like Netflix and Amazon Prime have the financial power to invest in original content, which is a significant draw for viewers. Smaller services may not have the same luxury, and relying solely on advertising may not be enough to keep them afloat.

AI

Is AI getting better at mind-reading?

Artificial intelligence (AI) technology has made significant advances in recent years, but it’s important to note that AI does not possess a “mind” in the same way humans do. Therefore, the term “mind-reading” is not an accurate description of AI capabilities.

Image Source: readamag.com

However, AI can be trained to predict and infer human behavior and thoughts to a certain extent by analyzing patterns and data. For example, machine learning algorithms can be trained to recognize facial expressions and body language, and infer the emotions and mental states of individuals.
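
As a toy illustration of the kind of supervised model involved, the sketch below trains a classifier on synthetic stand-in features using scikit-learn. The feature layout and labels are invented for illustration; real systems train on annotated images or video:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in features: e.g. distances between facial landmarks
# (mouth-corner lift, brow height, eye openness, ...).
X = rng.normal(size=(600, 8))
# Synthetic labels standing in for human-annotated emotions.
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # 1 = "happy", 0 = "neutral"

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```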

Also Read: Amazon is working to boost the capability of Alexa. Here’s how

Consider the phrases that are running through your mind: that tasteless joke you, wisely, kept to yourself at dinner; your unspoken opinion of your closest friend’s new partner. Now picture someone listening in. University of Texas at Austin researchers took another step in that direction on Monday.

A study published in the journal Nature Neuroscience detailed an artificial intelligence (A.I.) that can interpret the private thoughts of human beings by analyzing fMRI scans, which measure the flow of blood to various parts of the brain.

Researchers have already developed language-decoding techniques to recognize attempted speech from people who cannot speak and to let paralyzed people write simply by thinking about writing. The new language decoder, however, is among the first to work without implants.

It was capable of turning a person’s imagined speech into text and, when participants watched silent films as part of the study, producing fairly accurate accounts of what was happening onscreen.

The study focused on three volunteers, who spent 16 hours over several days in the lab of Dr. Alexander Huth, a neuroscientist at the university, listening to “The Moth” and other narrative podcasts. An fMRI scanner monitored the blood oxygen levels in various regions of their brains while they listened.

The brain activity patterns were then matched to the words and sentences the subjects had heard using a large language model.

According to Osaka University neuroscientist Shinji Nishimoto, “Brain activity is a kind of encrypted signal, and language models provide ways to decipher it.” In their study, Dr. Huth and his colleagues reversed the process, using another A.I. to translate the participant’s fMRI images into words and sentences.
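
Here is a heavily simplified sketch of that decoding loop: a language model proposes candidate words, an encoding model predicts the fMRI response each candidate would evoke, and the decoder keeps whichever best matches the observed scan. Every component below (the embedding, the linear encoding model, the tiny vocabulary) is an illustrative stand-in, not the study’s actual models:

```python
import numpy as np

VOCAB = ["the", "dog", "ran", "home", "slowly", "barked"]
DIM = 16

def embed(words):
    """Stand-in text features; the real system uses a language model."""
    vec = np.zeros(DIM)
    for w in words:
        seed = int.from_bytes(w.encode(), "little") % (2**32)
        vec += np.random.default_rng(seed).normal(size=DIM)
    return vec

# Stand-in linear "encoding model": predicted brain response = W @ features.
W = np.random.default_rng(0).normal(size=(32, DIM))

def predict_fmri(words):
    return W @ embed(words)

def decode(observed, length=4):
    """At each step, keep the word whose predicted brain response
    best matches the observed fMRI activity (greedy search)."""
    seq = []
    for _ in range(length):
        best = min(VOCAB, key=lambda w: np.linalg.norm(
            predict_fmri(seq + [w]) - observed))
        seq.append(best)
    return seq

true_words = ["the", "dog", "ran", "home"]
observed = predict_fmri(true_words)  # stand-in for a real scan
print(decode(observed))              # tends to recover the gist, not exact order
```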

Also Read: Did Elon Musk unwittingly expose his alt-Twitter account?

To evaluate the decoder, the researchers had the participants listen to fresh recordings and assessed how closely the translation resembled the actual transcript. Though the decoded script got nearly every word wrong, the passage’s meaning was frequently kept intact; the decoders were effectively summarising.

Additionally, participants were able to mask their internal monologues and throw off the decoder by diverting their attention to other thoughts. A.I. may be able to read our minds, but for the time being it will need our consent, and it must be trained on each person individually.

Amazon

Amazon is working to boost the capability of Alexa. Here’s how

Amazon Alexa, the voice-activated assistant, has quickly become a household name since its debut in 2014. It has evolved from a simple device that could answer basic questions into a fully fledged platform that can control smart homes, play music, set reminders, order groceries, and more.

Image Source: wired.com

But Amazon is not stopping there. The company is working on boosting the capability of Alexa even further in several ways. Amazon CEO Andy Jassy revealed Monday that the company is developing a more “generalized and capable” large language model (LLM) to support Alexa.

Also Read: Amazon sees cloud slowdown in April, shares erase gains

According to Jassy, Alexa has already been powered by an Amazon LLM, but the tech giant is currently developing a new, more powerful one. Jassy is confident that a better LLM will aid Amazon in its mission to create “the world’s best personal assistant,” though he noted that doing so across a variety of domains will be challenging.

A “large language model” (LLM) is a type of deep learning model that uses neural networks to process and analyze large amounts of natural language data and to generate outputs such as text or speech.

Some examples of LLMs include OpenAI’s GPT series, Google’s BERT, and Facebook’s RoBERTa. These models are trained on massive datasets, often consisting of billions of words, in order to learn patterns and relationships in language. They use this knowledge to perform a variety of natural language processing tasks, such as language translation, sentiment analysis, and text generation.

LLMs have a wide range of applications, from language translation and sentiment analysis to chatbots and voice assistants. They are particularly useful in situations where a large amount of text data needs to be processed quickly and accurately.
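
For a concrete sense of what an LLM does, here is a minimal text-generation sketch using the open-source Hugging Face transformers library with GPT-2, a small, freely available model. This is generic library usage for illustration, not Amazon’s Alexa stack:

```python
# pip install transformers torch
from transformers import pipeline

# Load a small pretrained LLM for text generation.
generator = pipeline("text-generation", model="gpt2")

prompt = "A voice assistant should respond to 'set a timer for ten minutes' by"
result = generator(prompt, max_new_tokens=30, num_return_sequences=1)
print(result[0]["generated_text"])
```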

Jassy emphasized that while Amazon has had years to invest in AI and LLMs, small businesses do not have the resources to do so, which is why the company released Bedrock last month.

Bedrock, currently available in a “limited preview,” offers a way to build generative AI-powered applications using pre-trained models from firms like AI21 Labs, Anthropic, and Stability AI. It also provides access to Titan FMs (foundation models), a collection of models that AWS has trained internally.
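
To give a flavor of the developer experience, here is a minimal sketch of invoking a Bedrock-hosted Anthropic model through boto3. The runtime client, request shape, and model ID reflect the interface Bedrock later shipped publicly and may differ from the limited preview described here, so treat them as assumptions:

```python
# pip install boto3; requires AWS credentials with Bedrock access.
import json
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "prompt": "\n\nHuman: Summarize what Amazon Bedrock is.\n\nAssistant:",
    "max_tokens_to_sample": 200,
})

response = client.invoke_model(
    modelId="anthropic.claude-v2",  # example model ID; availability varies
    contentType="application/json",
    accept="application/json",
    body=body,
)
print(json.loads(response["body"].read())["completion"])
```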

Also Read: Amazon plans to trim employee stock awards amid tough economy

ChatGPT has taken over the internet and grown in popularity since its debut late last year. Given the hype associated with it, it should come as little surprise that top technology companies are attempting to improve their own products with LLMs in order to keep up with the rapidly evolving AI market.

The Information recently reported that Apple is building LLM-based upgrades to Siri, and Google is probably taking similar steps with Assistant.