Visa Initiative to Invest $100 Million in Generative AI Ventures
In a move set to reshape the landscape of commerce and payments, Visa has announced plans to invest $100 million in companies at the forefront of developing generative AI technologies.
The investment initiative will be executed through Visa Ventures, the company’s 16-year-old global corporate investment arm. Visa, which has applied AI to payments since 1993, is now turning its attention to the burgeoning field of generative AI, a subset of artificial intelligence characterized by its ability to generate text, images, or other content from extensive datasets and textual prompts.
Jack Forestell, Visa’s Chief Product and Strategy Officer, emphasized the profound impact generative AI will have, stating, “While much of generative AI so far has been focused on tasks and content creation, this technology will soon not only reshape how we live and work, but it will also meaningfully change commerce in ways we need to understand.”
David Rolf, Head of Visa Ventures, underscored the transformative potential of generative AI, calling it “one of the most transformative technologies of our time.” He noted that Visa Ventures possesses flexibility in terms of the number and size of investments, expressing an interest in making a range of smaller investments in the early stages of the industry.
Rolf outlined the criteria for potential investments, specifying that Visa is seeking to support companies applying generative AI to address real challenges in commerce, payments, and fintech. This includes B2B processes around payments and infrastructure with the potential to significantly impact commerce. Rolf emphasized that Visa is open to engaging with companies at various levels of the technology stack, from data organization for generative AI to end-user experiences.
Furthermore, responsible AI use aligning with Visa’s policies is a key consideration. Rolf stated, “One of our key considerations is how well these companies are practicing responsible use of AI, in line with Visa’s policies.”
This announcement follows Visa’s strategic move to appoint Marie-Elise Droga as the head of fintech, who noted that her team frequently collaborates with the Visa Ventures team. This collaboration serves as a scouting engine, identifying innovative startups that align with Visa’s vision for the future of commerce and payments. As Visa Ventures embarks on this $100 million investment journey, the landscape of generative AI technologies in commerce and payments is poised for significant transformation.
I am a law graduate from NLU Lucknow. I have a flair for creative writing and hence in my free time work as a freelance content writer.
AI Startup Corti Raises $60 Million to Take on Microsoft in Health Care
Copenhagen-based startup Corti has secured a substantial $60 million in funding to further advance its mission of revolutionizing the healthcare industry through AI technology.
The investment was led by Prosus Ventures and Atomico, with participation from previous backers Eurazeo, EIFO, and Chr. Augustinus Fabrikker. Although the company has remained tight-lipped about its valuation, its remarkable growth in customers and usage speaks volumes about its impact in the sector.
Just two years ago, Corti raised $27 million in a Series A round, when it was assisting in 15 million consultations annually. Today, the company serves 100 million patients each year, with its AI assistant used a staggering 150,000 times daily, or roughly 55 million consultations a year across Europe and the United States. Corti says its tools can improve healthcare workers’ accuracy in outcome predictions by up to 40% while making administrative tasks 90% faster.
Corti’s innovative AI service is often described as an “AI co-pilot” for healthcare professionals, covering various areas of patient care. It assists in triaging during patient interactions, documents entire interactions, offers in-depth analysis to guide decision-making, provides second opinions, and generates real-time and post-meeting notes to identify areas for improvement and clinician training.
Corti’s success reflects the growing adoption of AI in healthcare, particularly after the COVID-19 pandemic highlighted the need for efficient and accurate medical support. The startup has attracted a diverse range of customers, including emergency services in Seattle, Boston, and Sweden, as well as numerous hospitals and medical services.
Unlike some competitors that rely on existing AI models, Corti has taken a different approach by developing its own AI models and components. Notably, to avoid introducing bias into its system, the company has not employed in-house medical experts; instead, it engaged researchers to fine-tune its AI, resulting in a more responsive and functional platform.
While initial skepticism and concerns about job replacement were common when Corti first launched in 2018, the broader acceptance of AI, epitomized by technologies like ChatGPT, has paved the way for more productive conversations. Corti aims to make AI in healthcare a mundane and indispensable part of the industry, steering clear of the contentious debates about its role.
Despite differing opinions on the impact of AI in medicine, Corti’s funding round signals a commitment to improving healthcare efficiency and provider capabilities. With the support of visionary founders Andreas Cleve and Lars Maaløe, Corti is poised to redefine the patient and healthcare experience, ultimately enabling more personalized, preventative, and proactive medicine in a rapidly evolving industry.
Microsoft Says It Will Protect Customers from AI Copyright Lawsuits
In response to growing concerns about the misuse of artificial intelligence (AI) in generating harmful content, Microsoft has pledged to take significant steps to protect its customers from potential legal repercussions.
This commitment comes as Australia is set to implement a new code that mandates search engines like Google and Bing to prevent the dissemination of child sexual abuse material created by AI.
The new code, drafted at the Australian government’s request, seeks to ensure that search engines do not return results that include AI-generated child sexual abuse material. It also prohibits AI functions integrated into these search engines from producing synthetic versions of such harmful content, commonly referred to as deepfakes.
According to e-Safety Commissioner Julie Inman Grant, the rapid proliferation of generative AI has taken the world by surprise. She emphasized that the code signifies a crucial development in the regulatory and legal landscape surrounding internet platforms. This landscape is evolving in response to the explosion of products that automatically generate lifelike content, presenting new challenges and responsibilities for tech giants like Google and Microsoft.
Inman Grant highlighted that an earlier code proposed by Google and Microsoft did not address AI-generated content adequately. Consequently, she called upon these industry giants to reevaluate and improve the code to align with the emerging AI landscape.
“When the biggest players in the industry announced they would integrate generative AI into their search functions, we had a draft code that was clearly no longer fit for purpose. We asked the industry to have another go,” Inman Grant explained.
Microsoft’s pledge to shield its customers from legal claims arising from AI-generated content reflects its dedication to responsible AI development and its recognition of the evolving legal and ethical concerns surrounding AI. As a major industry player, Microsoft is poised to play a pivotal role in shaping the industry’s response to these challenges.
This development comes on the heels of the Australian regulator registering safety codes for various other internet services, including social media, smartphone applications, and equipment providers. These codes are set to take effect in late 2023, marking a significant milestone in Australia’s efforts to ensure online safety and security.
While this regulatory initiative is a positive step towards addressing the risks posed by AI-generated content, it also raises questions about privacy and the balance between security and personal freedoms. The regulator is still in the process of developing safety codes related to internet storage and private messaging services, an endeavor that has faced resistance from privacy advocates worldwide.
In conclusion, Microsoft’s commitment to protecting its users from harm tied to AI-generated content is a proactive response to evolving challenges in the digital landscape. As technology continues to advance, it is imperative for industry leaders to collaborate with regulators and stakeholders to establish guidelines and practices for the responsible use of AI.
I am a law graduate from NLU Lucknow. I have a flair for creative writing and hence in my free time work as a freelance content writer.
Australia to Require AI-made Child Abuse Material to be Removed from Search Results
In a significant move to combat the proliferation of child sexual abuse material generated by AI, Australia’s internet regulator has announced that it will mandate search engines such as Google and Bing to take strict measures to prevent the dissemination of such harmful content.
This initiative comes as part of a new code drafted in collaboration with industry giants at the Australian government’s request, aimed at safeguarding the digital landscape from AI-generated child abuse material.
E-Safety Commissioner Julie Inman Grant revealed that the code, which is designed to protect the online community, imposes two crucial requirements on search engines. Firstly, it compels search engines to ensure that AI-generated child abuse material does not appear in search results. Secondly, it prohibits using generative AI to produce synthetic versions of such explicit content, commonly referred to as “deepfakes.”
“The use of generative AI has grown so quickly that I think it’s caught the whole world off guard to a certain degree,” Inman Grant acknowledged. This rapid expansion of AI technology has necessitated a reevaluation of regulatory and legal frameworks governing internet platforms.
Inman Grant pointed out that an earlier code drafted by Google (owned by Alphabet) and Microsoft (owner of Bing) did not address the emerging issue of AI-generated content. Consequently, she urged the two companies to revise their approach. “When the biggest players in the industry announced they would integrate generative AI into their search functions, we had a draft code that was clearly no longer fit for purpose. We asked the industry to have another go,” Inman Grant emphasized.
A spokesperson for the Digital Industry Group Inc., an Australian advocacy organization representing Google and Microsoft, expressed satisfaction with the approval of the revised code. “We worked hard to reflect recent developments in relation to generative AI, codifying best practices for the industry and providing further community safeguards,” the spokesperson stated.
This development follows the regulator’s earlier initiatives to establish safety codes for various internet services, including social media platforms, smartphone applications, and equipment providers, which will take effect in late 2023. However, the regulator continues to face challenges in developing safety codes for internet storage and private messaging services, with privacy advocates worldwide voicing concerns.
As Australia takes a proactive stance in addressing the grave issue of AI-generated child abuse material, it serves as a noteworthy example of the evolving regulatory landscape surrounding internet platforms. The code aims to strike a balance between harnessing the potential of AI technology and safeguarding the well-being of online users, particularly the vulnerable, as the digital realm continues to evolve.
Google will add AI models from Meta and Anthropic to its Cloud Platform
Google, a subsidiary of Alphabet Inc., is integrating more generative AI into its services and positioning itself as a one-stop resource for cloud users seeking the latest developments, adding artificial intelligence technologies from firms such as Meta Platforms Inc. and Anthropic to its cloud platform. Google’s cloud customers will be able to access Meta’s Llama 2 large language model and the Claude 2 chatbot from AI startup Anthropic, then customize them with company data for their own services and applications.
The move, announced Tuesday at the company’s Next ’23 conference in San Francisco, is part of Google’s effort to establish its platform as one where users can choose the AI model that best suits their needs, whether from Google itself or from one of its partners. Google Cloud customers now have access to more than 100 powerful AI models and tools, the company said.
The company also said its Duet AI tool will be made more widely available to Workspace customers this year, with public access to follow.
In applications such as Google Docs, Sheets, and Slides, users can tap a generative AI assistant that responds to prompts to help with content creation. According to Google, Duet AI, which was unveiled in May, can translate captions into 18 languages, deliver meeting summaries, and take notes during video calls.
With a new feature dubbed “attend for me,” users can send the tool to join meetings on their behalf, deliver messages, and produce a report of the event.

Google also announced new collaborations with businesses such as GE Appliances and Fox Sports that will let consumers benefit from AI in ways like creating personalized recipes or watching replays of sporting events from Fox’s broadcast library.
“We are in an entirely new era of digital transformation, fueled by gen AI,” Thomas Kurian, chief executive officer of Google Cloud, said in a blog post timed to the announcements. “This technology is already improving how businesses operate and how humans interact with one another.”
I am a student pursuing my bachelor’s in information technology. I have an interest in writing, so I work as a freelance content writer because I enjoy it. I also write poetry. I believe in Anne Frank’s quote, “Paper has more patience than people.”