Your Tech Story

Meta announces AI training and inference chip project

Meta Platforms (META.O) on Thursday disclosed new details about its data centre projects in support of its artificial intelligence work, including a custom chip “family” it is developing in-house.

In a series of blog posts, the owner of Facebook and Instagram said it had designed a first-generation chip in 2020 as part of its Meta Training and Inference Accelerator (MTIA) programme. The goal was to make recommendation models, which are used to serve ads and other content in news feeds, more efficient.

Image Source: moneycontrol.com

Reuters previously reported that the company had scrapped plans for a broad rollout of its first in-house AI chip and was already working on a successor. The blog posts characterised that first MTIA chip as a learning opportunity.

According to the posts, the first MTIA chip handled only the inference side of AI, in which models trained on huge amounts of data judge what content should be displayed.

Meta software engineer Joel Coburn said during a presentation on the new chip that the company had initially used GPUs for inference tasks but found they were a poor fit.

“Their efficiency is low for real models, despite significant software optimizations. This makes them challenging and expensive to deploy in practice,” Coburn said. “This is why we need MTIA.”

Source: reuters.com

A Meta representative did not provide a release timeline for the forthcoming chip or elaborate on the company’s plans for chips that could also train models.

Meta has been undertaking a major overhaul of its AI infrastructure since executives realised it lacked the hardware and software needed to keep up with demands from product teams building AI-powered features.

As a result, the company scrapped plans for a large-scale rollout of an in-house inference chip and instead began work on a more ambitious chip capable of both training and inference, according to Reuters.

Although Meta’s original MTIA chip struggled with high-complexity artificial intelligence (AI) models, it handled low- and medium-complexity models more efficiently than rival chips, according to the blog posts.

The MTIA chip is built on the open-source RISC-V chip architecture and consumes just 25 watts of power, far less than leading chips from suppliers such as Nvidia Corp, according to Meta.

Meta said it would break ground on its first AI-focused data centre this year, and it offered further details on plans to rework its data centres around newer AI-oriented networking and cooling technologies.

In a video describing the changes, an employee said the new design would be 31 per cent cheaper to build and could be constructed twice as fast as the company’s current data centres.

Meta also said it had built an AI-powered system to help its developers write code, comparable to tools offered by Alphabet Inc, Amazon.com Inc and Microsoft Corp.

Will AI Take Over The World?

In recent years, the rise of artificial intelligence (AI) has sparked debates and concerns about its potential to take over the world. With science fiction movies and novels portraying dystopian scenarios, it’s natural to wonder whether AI will surpass human intelligence and seize control.

In this blog post, we will examine the topic of AI takeover, exploring the current state of the technology, its limitations, and the ethical safeguards in place, ultimately separating the facts from the fiction.

Image Source: theconversation.com

While AI has made significant advancements, particularly in narrow domains, we are still far from witnessing a true AI takeover. AI systems excel in tasks such as image recognition, language processing, and decision-making within predefined parameters.

However, they lack the capability for general intelligence, which encompasses adaptability, abstract reasoning, and consciousness. AI is designed to augment human capabilities rather than replace them, offering valuable tools for various industries.

AI systems have inherent limitations that prevent them from taking over the world. One major constraint is that artificial intelligence algorithms are created and trained by humans, and their behavior is dictated by the data they are exposed to. They are essential tools created to assist humans in solving complex problems, relying on human oversight and intervention.

Furthermore, AI lacks certain human traits like emotions, empathy, and intuition, which are crucial for understanding complex social dynamics and making ethical judgments. Without these qualities, artificial intelligence systems are limited in their ability to fully comprehend and respond to the complexities of the real world.

To address concerns related to AI’s potential misuse, the development and deployment of AI systems are governed by ethical frameworks and regulations. Organizations and researchers are increasingly emphasizing the importance of responsible AI practices.

Initiatives such as explainable AI, fairness in algorithms, and transparency are being promoted to ensure AI systems operate in a trustworthy manner. Additionally, policymakers and experts are working towards establishing legal frameworks that address AI’s impact on society.

Regulations are being implemented to safeguard privacy, prevent discrimination, and maintain human control over critical decisions. These measures aim to ensure that artificial intelligence technologies are developed and used for the benefit of humanity, with human values and ethics at their core.

Fears of an AI takeover are largely rooted in science fiction rather than reality. While the technology has demonstrated remarkable capabilities in specific domains, it lacks the fundamental qualities required for world domination.

The implementation of ethical safeguards and the focus on responsible artificial intelligence development will continue to ensure that AI remains a valuable tool in our hands, rather than a threat to humanity.

Google Launching Tools to Identify Misleading and AI Images

Google is introducing a pair of updates to its image search to help stop the spread of false information, especially now that photorealistic fakes are easy to create with artificial intelligence tools.

The first of Alphabet Inc.’s new features, named “About this image,” provides more context by indicating when an image, or ones similar to it, was first indexed by Google, where it first appeared, and where else it has appeared online.

Image Source: finance.yahoo.com

The aim is to help users trace an image back to its original source and see any debunking context that news organizations may have provided.

Every AI-generated image produced by Google’s tools will be marked as such, and the company is collaborating with other platforms and services to ensure that the markup is included in the files they distribute.

Among the publishers Google has on board are Midjourney and Shutterstock, and the objective is to ensure that all AI-generated material appearing in search results is labelled as such.

Google noted in a blog post, “You’ll be able to find this tool by clicking on the three dots on an image in Google Images results, searching with an image or screenshot in Google Lens, or by swiping up in the Google App when you’re on a page and come across an image you want to learn more about. Later this year, you’ll also be able to use it by right-clicking or long-pressing on an image in Chrome on desktop and mobile.”

In short, the “About this image” drop-down will surface context such as the date an image, or one similar to it, was first indexed by Google, where that was, and where the image came from, and every AI-generated image produced by the tech giant’s tools will be marked.

Google announced that any image created with its own generative AI tools would carry metadata indicating it was AI-generated, and that creators and publishers would be able to use the same technology to label their own images.

The provenance of online images is a growing concern in the age of AI, and several businesses are building verification and authentication tools. For instance, Truepic Inc., which is backed by Microsoft, offers solutions that guarantee an image has not been altered between capture and delivery.

Although Google’s new features, due to roll out over the course of this year, are relatively low-tech, they could have an outsized positive impact if they win enough industry backing.

What is Google Help Me Write?

This year’s Google I/O had a strong artificial intelligence theme, and prompt-based text generation, especially across Google’s productivity suite, was a major part of it. Help Me Write was among the many new features announced for top Google apps at the event.

Image Source: hindustantimes.com

Google said on Wednesday at its Google I/O event that Gmail will soon have a tool that will use artificial intelligence to compose complete emails for users.

The “help me write” function is essentially an extension of the auto-replies and generative text Google already employs in Gmail. Sundar Pichai, CEO of Google, cited the example of requesting a flight refund when introducing the feature.

The AI feature drafts a complete message requesting the refund, drawing on details from earlier correspondence with the airline. The draft can then be edited as desired and sent.

Once the text has been generated, users can edit it to better suit their purposes, with options to formalize, elaborate, and shorten, as well as an I’m Feeling Lucky option. The feature has been available to a small group of trusted testers since March, and Google says a wider audience will get access later in the year.

Help Me Write users will be able to give suggestions a thumbs up or down, which should help the feature become more accurate and useful over time.

This is just one illustration of how Google is enhancing Google Workspace with generative artificial intelligence (AI) features to make its products more accessible. Help Me Write can be used to overcome writer’s block and ease communication in both professional and everyday settings.

With the new Help Me Write feature coming to Gmail and Google Docs, users will be able to draft emails and passages of text from brief prompts. It is essentially a major expansion of the existing Smart Compose technology, which autocompletes sentences based on what a user typically writes.

Email fundamentally changed how we communicate in the five decades since its creation, but it has also grown tedious. Writing an email has become a chore for many routine tasks, such as scheduling a pet sitter, asking a doctor for medical information, or requesting a refund from a merchant.

Help Me Write is poised to have a significant impact on how the world communicates via email given that Gmail has 1.8 billion active users.

How is the new Google AI search different from the Bard chatbot?

In an effort to dispel concerns that it is falling behind Microsoft Corp.’s OpenAI-powered Bing search, Alphabet Inc.’s Google on Wednesday showed off a revamped core search product that weaves more AI into its results.

Google also has a chatbot called Bard that rivals ChatGPT, the OpenAI chatbot whose human-like responses have drawn enormous user interest. According to the company, conventional Google searches can still be used to find information, such as a product to buy.

Image Source: reuters.com

Bard is a chatbot with a persona, able to hold conversations that feel human-like, and is intended as a tool for creative collaboration, such as writing software code or captions for pictures.

The Search Generative Experience, the improved search feature, keeps the familiar search bar’s look and function on Google’s home page.

The difference is in the responses: if the new Google determines that generative artificial intelligence (AI) can be used to respond to a query, the AI-generated response will appear at the very top of the results page. The standard Web links are still available below.

For instance, a person searching for “weather New York” will typically be directed to an eight-day prediction, but if they search for “what to wear in the California city,” AI will give a long response, according to a test done for Reuters earlier this week.

The result included links to websites where it found such advice and said, “You should bring layers, including a short-sleeved shirt and a light jumper or jacket for the day.” Additionally, users will have access to a brand-new “conversational mode,” which works similarly to Bard and ChatGPT in that it keeps track of the user’s previous inquiries to make it simpler for users to ask follow-up questions.

However, the company stresses that the conversational mode is meant only to refine search results and is not intended to be a chatbot with a personality. Unlike Bard and ChatGPT, for instance, it will never refer to itself as “I” in its responses.

The Bard chatbot is designed to engage in natural-language conversations with users, typically in a specific domain such as customer service or personal assistance. It uses natural language processing and machine learning to understand a user’s questions and provide helpful responses. While both systems use AI, they have different goals and capabilities.

Google AI search is focused on retrieving information and presenting relevant results, while the Bard chatbot is focused on conversational interaction and providing assistance in a specific domain. Bard is now available without a waitlist in over 180 countries, the company said, and support for 40 additional languages is planned.

Is AI getting better at mind-reading?

Artificial intelligence (AI) technology has made significant advances in recent years, but it’s important to note that AI does not possess a “mind” in the same way humans do. Therefore, the term “mind-reading” is not an accurate description of AI capabilities.

Image Source: readamag.com

However, AI can be trained to predict and infer human behavior and thoughts to a certain extent by analyzing patterns and data. For example, machine learning algorithms can be trained to recognize facial expressions and body language, and infer the emotions and mental states of individuals.

Consider the phrases that are running through your mind: that tasteless joke you, wisely, kept to yourself at dinner; your unspoken opinion of your closest friend’s new partner. Now picture someone listening in. University of Texas at Austin researchers took another step in that direction on Monday.

The study, published in the journal Nature Neuroscience, described an artificial intelligence (A.I.) that can interpret the private thoughts of human beings by analysing fMRI scans, which measure the flow of blood to different regions of the brain.

Researchers have previously built language-decoding methods to recognise attempted speech in people who cannot speak and to let paralysed people write simply by thinking about writing. The new language decoder, however, is among the first that does not rely on implants.

When participants watched silent films during the study, the decoder was able to produce reasonably accurate descriptions of what was happening onscreen, and it could turn a person’s imagined phrases into text.

The study focused on three volunteers, who spent 16 hours over several days in Dr. Alexander Huth’s lab listening to “The Moth” and other narrative podcasts while an fMRI scanner measured blood-oxygen levels across regions of their brains.

The brain activity patterns were then matched against the words and sentences the subjects had heard, using a large language model.

According to Osaka University neuroscientist Shinji Nishimoto, “Brain activity is a kind of encrypted signal, and language models provide ways to decipher it.” In their study, Dr. Huth and his colleagues reversed the process, using another A.I. to translate the participants’ fMRI images into words and sentences.
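The decoding idea described above can be pictured as a search: an encoding model predicts the fMRI response a candidate sentence should evoke, and the decoder picks the candidate whose predicted response best matches the observed scan. Below is a minimal toy sketch of that matching step in Python. Everything here (the byte-based featurizer, the tiny dimensions, the random linear encoding model) is an illustrative stand-in, not the study's actual models, which use large-language-model features and encoding models fit on many hours of per-subject fMRI data.

```python
import numpy as np

rng = np.random.default_rng(0)

N_VOXELS = 50    # toy count of measured brain locations
N_FEATURES = 8   # toy size of a text-feature vector

# "Learned" linear encoding model: text features -> predicted fMRI response.
W = rng.normal(size=(N_FEATURES, N_VOXELS))

def text_features(sentence: str) -> np.ndarray:
    """Toy deterministic featurizer standing in for a language-model embedding."""
    vec = np.zeros(N_FEATURES)
    for i, byte in enumerate(sentence.encode()):
        vec[i % N_FEATURES] += byte
    vec -= vec.mean()                          # centre so sentences differ more
    return vec / (np.linalg.norm(vec) + 1e-9)  # unit length

def predicted_response(sentence: str) -> np.ndarray:
    """Forward direction: what brain activity should this sentence evoke?"""
    return text_features(sentence) @ W

def decode(observed: np.ndarray, candidates: list[str]) -> str:
    """Reverse direction: pick the candidate whose predicted response
    correlates best with the observed fMRI pattern."""
    def score(sentence: str) -> float:
        return np.corrcoef(predicted_response(sentence), observed)[0, 1]
    return max(candidates, key=score)

# Simulate a subject who actually heard the first sentence (plus scanner noise).
heard = "the dog ran home"
observed = predicted_response(heard) + rng.normal(scale=0.1, size=N_VOXELS)

candidates = ["the dog ran home", "a cat sat still", "rain fell all night"]
print(decode(observed, candidates))
```

This also shows why the decoder "paraphrases" rather than transcribes: it can only rank candidate phrasings by how well their predicted brain responses fit the scan, so any wording that evokes a similar predicted response scores nearly as well as the true one.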

To evaluate the decoder, the researchers had participants listen to new recordings and measured how closely the decoded text matched the actual transcript. Although nearly every word in the decoded script was wrong, the passage’s meaning was frequently preserved: the decoder was, in effect, paraphrasing.

Additionally, participants were able to shield their internal monologue from the decoder by directing their attention elsewhere. A.I. may be able to read our minds, but for the time being it will need our consent, and it can only read one thought at a time.