Your Tech Story


Chip giant Nvidia nears trillion-dollar status on AI bet

In one of the biggest single-day surges in value ever seen for a U.S. stock, Nvidia Corporation’s shares soared 24 percent on Thursday after a blockbuster revenue forecast revealed that Wall Street had not yet priced in AI technology’s potential to change the world.

The rally more than doubled the stock’s price for the year and lifted the chip designer’s overall market value to more than 939 billion dollars, a gain of roughly 184 billion dollars.
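
As a sanity check, the implied one-day gain can be recovered from the closing value and the percentage move. A rough back-of-the-envelope sketch (the small gap versus the reported 184 billion comes from rounding in the published figures):

```python
# Back-of-the-envelope check: a 24% single-day rise ending at ~$939B
# implies a starting value of 939 / 1.24 and a gain of the difference.
closing_value_bn = 939   # reported post-surge market value, in $B
daily_gain_pct = 0.24    # reported single-day rise

opening_value_bn = closing_value_bn / (1 + daily_gain_pct)
gain_bn = closing_value_bn - opening_value_bn

print(f"implied opening value: {opening_value_bn:.0f}B")  # ~757B
print(f"implied one-day gain:  {gain_bn:.0f}B")           # ~182B vs ~184B reported
```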

Image Source: businesstoday.in

Nvidia is now worth nearly twice as much as Taiwan’s TSMC, the world’s second most valuable chipmaker. Among U.S. companies, it trails only Apple Inc., Alphabet Inc., Microsoft Corp., and Amazon.com Inc. in market value.


The upbeat news also lifted the chipmaking sector and companies focused on artificial intelligence, propelling stock markets from Japan to Europe. Shares of Advanced Micro Devices, Inc. closed 11 percent higher in the US, while the other tech giants ended between 0.6 percent and 3.8 percent higher.

Citing the company’s dominance in the market for the processors that power ChatGPT and many services like it, analysts rushed to raise their price targets on Nvidia stock, with 27 of them lifting their estimates on the view that all paths in AI lead to it.

The average price target has nearly doubled over the past twelve months. Under the most bullish scenario, a 644.80 dollar price target from Elazar Advisors, Nvidia would be valued at 1.59 trillion dollars, close to Alphabet’s worth.

“In the 15+ years we have been doing this job, we have never seen a guide like the one Nvidia just put up with the second-quarter outlook that was by all accounts cosmological, and which annihilated expectations,” Stacy Rasgon of Bernstein said.

Source: money.usnews.com

Nvidia, the fifth most valuable US firm, on Wednesday forecast quarterly revenue more than 50 percent above the average Wall Street estimate and said it would have more AI chips available in the second half of the year to meet surging demand.

As generative artificial intelligence is incorporated into every product and service, CEO Jensen Huang estimated that a trillion dollars’ worth of existing data center equipment will need to be replaced with AI chips.

The results are encouraging for big tech firms, which have shifted their attention to artificial intelligence in the belief that the technology can boost demand at a time when their key revenue generators, cloud computing and digital advertising, are under pressure from an economic downturn.


According to several analysts, Nvidia’s outcomes demonstrate that the generative artificial intelligence surge may be the next major economic catalyst.

“We’re really just seeing the tip of the iceberg. This really could be another inflection point in technological history, such as the internal combustion engine – or the internet,” said Derren Nathan, head of equity analysis at Hargreaves Lansdown.

Source: money.usnews.com

Nvidia short sellers lose $5 billion as shares rise more than 90%

As reported by financial data company S3 Partners, short sellers of Nvidia Corporation have lost 5.09 billion USD so far this year as the stock has risen more than 90 percent.

According to the firm’s Wednesday report, the stock is the biggest money-losing equity short of 2023, followed by Apple and Tesla.

Image Source: finance.yahoo.com

According to the report, Apple’s short sellers have lost 4.47 billion USD so far in 2023, while that stock has risen approximately 30 percent. Tesla’s short sellers have lost 3.65 billion USD this year as the stock has gained around 33 percent.


For the year so far, Nvidia’s short interest has decreased by 7.04 million shares, or 18 percent. The percentage of float sold short currently stands at 1.32 percent, the lowest level since October 2022.

Nvidia shares were down 1.1 percent in midday trading on Wednesday, alongside declines in other chipmakers, following a disappointing forecast from Advanced Micro Devices, Inc. (AMD) late on Tuesday.

Investors who sell securities “short” borrow shares in anticipation of a decline in the stock price, which would let them buy the shares back at a lower price, return them to the lender, and pocket the difference.
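
That mechanic translates directly into arithmetic: a short seller’s profit is the sale proceeds minus the cost of buying the shares back, so a rising price produces a loss. A minimal sketch with hypothetical prices (not Nvidia’s actual quotes):

```python
def short_pnl(shares: int, sell_price: float, buyback_price: float) -> float:
    """Profit (or loss, if negative) on a short position:
    proceeds from selling borrowed shares minus the cost of buying them back."""
    return shares * (sell_price - buyback_price)

# Hypothetical example: short 100 shares at $300; the stock then rises 30% to $390.
loss = short_pnl(100, 300.0, 390.0)
print(loss)  # -9000.0: the short seller loses $9,000
```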

NVIDIA Corp. designs and produces chipsets, processors, and associated multimedia software for computers. Its operating segments are the Graphics Processing Unit (GPU), Tegra Processor, and All Other units.

The GPU segment comprises product brands such as GeForce for gaming enthusiasts, Quadro for creators, Tesla and DGX for AI data scientists and big-data experts, and GRID for cloud-based visual computing customers.


The Tegra Processor segment integrates an entire computer onto a single chip, combining multi-core central processing units and graphics processing units to power supercomputing for mobile gaming and entertainment devices, as well as autonomous robots, drones, and vehicles.

The All Other segment includes stock-based compensation expense, corporate infrastructure and support costs, acquisition-related costs, legal settlement costs, and other non-recurring charges.


Are Google’s AI supercomputers faster than Nvidia’s?

With powerful machine learning models still the hottest topic in the tech business, Google on Wednesday released details about one of its AI supercomputers, claiming it is faster and more efficient than rival Nvidia systems.

Tensor Processing Units, or TPUs, are artificial intelligence (AI) chips that Google has been developing and using since 2016. Nvidia currently holds about 90 percent of the market for AI model training and deployment.

Image Source: techzine.eu

As a leading innovator in AI, Google has produced several of the most significant developments in the field over the past decade. However, some believe the company has lagged in commercializing its ideas, and internally it has been rushing to release products to show it has not squandered its lead, a “code red” situation.


Training requires large numbers of supercomputers and processors working simultaneously, with the machines running nonstop for weeks or months, as is the case with AI models and products like Google’s Bard or OpenAI’s ChatGPT, which are powered by Nvidia’s A100 chips.

On Tuesday, Google said it had built a system with more than 4,000 TPUs, joined with custom components designed to run and train AI models. The system has been in operation since 2020 and was used over 50 days to train Google’s PaLM model, which competes with OpenAI’s GPT models.

The Google researchers claimed that the TPU-based supercomputer, known as TPU v4, is “1.2x-1.7x faster and uses 1.3x-1.9x less power than the Nvidia A100.” The researchers said, “The performance, scalability, and availability make TPU v4 supercomputers the workhorses of large language models.”
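
Taken at face value, those two quoted ranges can be combined into an implied performance-per-watt advantage. A rough sketch of the arithmetic only; actual efficiency depends on the workload:

```python
# If TPU v4 is s times faster and draws 1/p of the power (p times less power),
# its performance-per-watt advantage over the A100 is s * p.
speedup_range = (1.2, 1.7)       # "1.2x-1.7x faster"
power_saving_range = (1.3, 1.9)  # "uses 1.3x-1.9x less power"

perf_per_watt_low = speedup_range[0] * power_saving_range[0]
perf_per_watt_high = speedup_range[1] * power_saving_range[1]

print(f"implied perf/watt advantage: {perf_per_watt_low:.2f}x to {perf_per_watt_high:.2f}x")
# roughly 1.56x to 3.23x
```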

Google’s researchers did not compare their TPU results against the H100, Nvidia’s most recent AI chip, because the H100 came to market later and was built with more advanced manufacturing techniques.

Nvidia CEO Jensen Huang said results for the company’s latest chip, the H100, were noticeably faster than those for the previous generation, pointing to findings and rankings from an industry-wide AI chip benchmark called MLPerf that were published on Wednesday.

Given the high cost of the significant computing power required for AI, many in the sector are concentrating on creating new processors, hardware elements like optical links, or software innovations that will lower the required computing power.


The computational demands of AI also benefit cloud service providers like Google, Microsoft, and Amazon, who may rent out computer processing on an hourly basis and give startup companies credits or computing time to foster business partnerships. For instance, Google claimed that their TPU chips were used to train the AI image generator Midjourney.


Nvidia and Microsoft Collaborate To Build AI Supercomputers

Nvidia and Microsoft are working together in a “multi-year collaboration” to build “one of the most powerful AI supercomputers in the world,” capable of handling the massive processing workloads needed to train and scale AI.

Image Source: ciobulletin.com

According to the reports, the AI supercomputer will be driven by Microsoft Azure’s cutting-edge supercomputing technology along with Nvidia GPUs, networking, and a comprehensive stack of AI software to support businesses in training, deploying, and scaling AI, including big, cutting-edge models. 

NVIDIA’s A100 and H100 GPUs will be part of the array, coupled with NVIDIA’s Quantum-2 400Gb/s InfiniBand networking technology. Notably, this would be the first public cloud to feature NVIDIA’s complete AI tech stack, allowing businesses to train and use AI at large scale.

Manuvir Das, VP of enterprise computing at Nvidia, noted, “AI technology advances as well as industry adoption are accelerating. The breakthrough of foundation models has triggered a tidal wave of research, fostered new startups and enabled new enterprise applications. Our collaboration with Microsoft will provide researchers and companies with state-of-the-art AI infrastructure and software to capitalise on the transformative power of AI.”

NVIDIA will work with Azure to research and accelerate advances in generative AI, a rapidly developing field of artificial intelligence in which foundation models like Megatron-Turing NLG 530B serve as the basis for unsupervised, self-learning algorithms that generate new text, digital images, code, audio, or video.

Additionally, the companies will work together to improve Microsoft’s DeepSpeed deep learning optimization software. Azure enterprise clients will have access to NVIDIA’s full suite of AI workflows and software development tools tailored for Azure. To accelerate transformer-based models used for large language models, generative AI, and code generation, among other uses, Microsoft’s DeepSpeed will make use of the NVIDIA H100 Transformer Engine.

That engine applies 8-bit floating-point precision in DeepSpeed, offering twice the throughput of 16-bit operations and dramatically speeding up AI calculations for transformers.
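
The throughput claim follows from the width of the number format: an 8-bit float is half the size of a 16-bit one, so the same memory traffic moves twice as many values. A minimal sketch of that arithmetic, using the 530B-parameter figure mentioned above purely as an illustrative model size:

```python
# Halving the bytes per value doubles how many values fit in the same
# memory footprint and how many move per unit of memory bandwidth.
params = 530e9    # illustrative: a 530B-parameter model
bytes_fp16 = 2    # 16-bit float
bytes_fp8 = 1     # 8-bit float

size_fp16_gb = params * bytes_fp16 / 1e9
size_fp8_gb = params * bytes_fp8 / 1e9
throughput_gain = bytes_fp16 / bytes_fp8

print(f"weights in FP16: {size_fp16_gb:.0f} GB")  # 1060 GB
print(f"weights in FP8:  {size_fp8_gb:.0f} GB")   # 530 GB
print(f"values moved per unit of bandwidth: {throughput_gain:.0f}x")  # 2x
```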

The Verge reports that the recent rapid expansion of these AI models has driven a major increase in demand for robust computing infrastructure.

The partnership is intriguing for several reasons, notably because it makes one of the largest computing clusters accessible to companies. Thanks to optimizations in the newest ‘Hopper’ generation of NVIDIA GPUs, companies will be able to train and deploy AI at a scale that was previously prohibitively expensive, and to do so with significantly better efficiency.

Although NVIDIA has a supercomputer of its own, the collaboration demonstrates that they are aware of the enormous computational demands placed on them by contemporary algorithms. This development represents a collaboration between two of the largest organizations in the AI industry.

Microsoft has experience in this area, as shown by its relationship with OpenAI and its commitment to developing ethical and safe AI. NVIDIA, for its part, has been a pillar of AI development and research for the past decade thanks to its powerful GPUs and supporting tech stack, which includes CUDA and Tensor Cores.


190GB of Data Allegedly Stolen From Samsung Leaked by Nvidia Hackers

LAPSUS$, the hacking group responsible for the recent Nvidia data breach, claims to have hacked Samsung and stolen nearly 200GB of sensitive data.

The 190GB trove of exposed files includes source code for Samsung’s activation servers, bootloaders and biometric unlock algorithms for all recently released Samsung devices, and trusted applets for Samsung’s TrustZone environment. The leaked data is also thought to include Qualcomm’s confidential source code.

Members of the LAPSUS$ hacking group have claimed responsibility for the data breach, posting details of the data obtained in a Telegram channel and encouraging other members to “enjoy” the contents made available for Torrent download.

According to the message, the hackers also got “a variety of other data,” but the elements listed could put Samsung device users in immediate danger of being hacked or impersonated by cybercriminals.

Because the trusted applets (TA) source codes obtained by LAPSUS$ are installed in Samsung’s Trusted Execution Environment (TEE) known as TrustZone, the hackers – and anyone who has downloaded the Torrent files – may be able to bypass Samsung’s hardware cryptography, binary encryption, and access control.


The total size of the leaked data is around 190GB, which LAPSUS$ divided into three compressed files, and the torrent has already been downloaded and shared by over 400 peers.

According to a Samsung spokesperson, the company “immediately after discovering the incident” took steps to strengthen its security system.

“According to our preliminary findings, the breach involves some source code related to Galaxy device operation, but no personal information about our customers or employees. At this time, we do not expect any impact on our business or customers. We’ve put safeguards in place to prevent future incidents, and we’ll continue to serve our customers as usual,” the spokesperson said.

Source: www.itpro.co.uk

Qualcomm has yet to respond, and it’s unclear whether the hacking group had any demands for Samsung before leaking the private information.

Researchers discovered “severe” security flaws in a long line of Samsung flagship smartphones just weeks ago, which if exploited could allow attackers to steal cryptographic keys.

It also comes just five days after Nvidia confirmed that on February 26th, the LAPSUS$ hacking group successfully breached its systems and distributed 1TB of confidential company data, including security credentials for 71,000 former and current Nvidia employees.

The data was obtained through a double extortion scheme that entailed compromising a victim and stealing data before encrypting their machine, as well as threatening to leak the stolen data if the ransom is not paid. In the last year, the number of double extortion cases has increased, with one in every seven cases resulting in the loss of sensitive information.

It is worth noting that LAPSUS$’ attacks coincide with a spike in cyber warfare due to Russia’s invasion of Ukraine, yet the hacking group maintains that its actions are not politically motivated.

According to Matt Aldridge, principal solutions analyst at Carbonite and Webroot, “these gangs continue to be more inventive with the types of data and businesses they target,” similar to “most modern cyber attacks.”

“Given the high-profile nature of the victim, the hackers may have posted a message releasing Samsung’s data along with a snapshot of its source code in order to gain additional leverage in the event of a ransom demand. However, because the data breach has already occurred and the data has been exfiltrated, no ransom payment can ensure that all copies of the data are securely destroyed,” he said.


Nvidia finally acknowledged the GeForce RTX 3000 series global shortage publicly

On 5th December 2020, during the Credit Suisse 24th Annual Technology Conference, Nvidia CFO Colette Kress acknowledged the global shortage of the RTX 3000 series. She went into some detail about the reasons for the shortage, as there had been several reports about what was causing the crisis. Kress revealed that wafer yields are not the only reason behind the global shortage.

During the webcast, Kress also mentioned that the production yield of Samsung’s 8nm node is one of the prime reasons for the slowed pace of 3000 series production. Before this conference, Nvidia had not issued any official statement acknowledging the global shortage of the RTX 3000 series.

Reasons behind the Nvidia GeForce RTX 3000 shortage

Kress gave clear statements in the webcast regarding the shortage of the RTX 3000 series. Apart from revealing that the yield of Samsung’s 8nm node is one of the causes, she said the company is facing supply constraints that go beyond wafers and silicon: there are also constraints in substrates and components, though no details were given. The company will continue to work through the crisis, and it believes demand will exceed supply in Q4 for overall gaming.

Image Source: indianexpress.com

From what Kress said in the webcast, it is clear that the low production rate of various components is one reason GPU output has fallen. Until now, Samsung has been the sole provider of the 8nm nodes used in the Nvidia GeForce RTX 3000 series, which is also slowing Nvidia’s production. The company is therefore looking at other options and will most likely source nodes from TSMC as well; it is better to keep more than one option in hand in a time of crisis.

Samsung or TSMC nodes 

Rumors and reports had circulated earlier about whether Nvidia would use Samsung’s or TSMC’s nodes. But it was only a few months ago that Nvidia confirmed the use of Samsung’s 8N node for building second-generation Ampere RTX cards, saying it was replacing the TSMC 7nm node used for the initial Ampere GA100 GPU. Nvidia CEO Jensen Huang then announced that the company would use TSMC technology for the majority of its GPUs and Samsung for a small subset of graphics silicon production.

On 1st September 2020, at the GeForce Special Event, Jensen Huang announced that the company had decided to use Samsung’s 8N process for all the large Ampere RTX GPUs. Many reports said that with Samsung’s smaller process node, the new generation of GeForce chips would offer a huge amount of rasterizing performance. But with production falling short, Nvidia will probably switch to TSMC again.

Why is the shortage taking place?

According to some experts, the shortage stems from difficulties in logistics and the supply chain. A report by Tom’s Hardware explains that most of the cargo space in the distribution system is being taken up by pharmaceutical supplies, causing a shortage of capacity for electronics. In the middle of a pandemic, most attention is going to pharmaceutical needs and their fastest possible transportation, which may be an important reason for the delays in yield and production. Moreover, COVID-19 vaccines are being shipped to so many places across the world, and that is currently the prime concern of every nation.

Demand for these GPUs has already exceeded supply, and you may have noticed that stock vanishes in no time. Even buyers lucky enough to get hold of an RTX 3000 series card are often receiving an unboxed unit.

Will it get better soon?

Kress mentioned that the situation should improve within a couple of months, though it is difficult to quantify exactly how much of the demand the company can meet. Nvidia will give a further update on the situation by the end of this quarter. Until then, we just have to sit tight and wait until these GPUs are back in stock. The pandemic has affected every part of the industrial sector, and we have to overcome it.