
By the Numbers: Are Tech IPOs Worth the Hype?


Tech IPOs — Hype vs. Reality

Initial Public Offerings (IPOs) generate massive amounts of attention from investors and media alike, especially for new and fast-rising companies in the technology sector.

On the surface, the attention is warranted. Some of the most well-known tech companies have built their profile by going public, including Facebook, which raised $16 billion in its 2012 IPO.

But when you peel away the hype and examine investor returns from tech IPOs more closely, the reality can leave a lot to be desired.

The Hype in Numbers

Among companies beginning to sell shares on public stock exchanges, tech offerings have become synonymous with billion-dollar launches.

Given the sheer magnitude of technology-sector IPOs, it’s easy to understand why. Globally, the technology sector has regularly generated the most IPOs and the highest proceeds, as shown in a recent report by Ernst & Young.

In 2019 alone, the world’s public markets saw 263 IPOs in the tech sector with total proceeds of $62.8 billion. That’s far ahead of the second-place healthcare sector, which saw 174 IPOs generate proceeds of $22.5 billion.

The discrepancy is more apparent in the U.S., according to data from Renaissance Capital. In fact, over the last five years, the tech sector has accounted for 23% of total U.S. IPOs and 34% of proceeds generated by U.S. IPOs.


The prevalence of tech is even more apparent when examining history’s largest IPOs. Of the 25 largest IPOs in U.S. history, 60% come from the technology and communication services sectors.

That list includes last year’s well-publicized IPOs for Uber ($8.1 billion) and Lyft ($2.3 billion), as well as a direct public offering from Slack ($7.4 billion). Soon the list might include Airbnb, which plans to list within the communication services sector instead of tech.

The Reality in Returns

But the proof, as they say, is in the pudding.

Uber and Lyft were two of 2019’s largest U.S. IPOs, but they also saw some of the poorest returns. Uber fell 33.4% from its IPO price at year end, while Lyft was down 35.7%.

And they were far from isolated incidents. Tech IPOs averaged a return of -4.6% last year, far behind the top sectors of consumer staples (led by Beyond Meat) and healthcare.

Sector | Avg. IPO Return (2019)
Consumer Staples | 103.0%
Healthcare | 35.9%
Financials | 30.8%
Materials | 30.4%
Consumer Discretionary | 14.6%
Industrials | 6.1%
Energy | -0.4%
Technology | -4.6%
Utilities | -7.8%
Real Estate | -9.4%
Communication Services | -66.4%

While last year was the first time in four years that tech IPOs averaged a negative return, analysis of the past decade confirms that they have consistently underperformed.

A decade-long analysis from investment firm Janus Henderson demonstrated that U.S. tech IPOs start underperforming compared to the broad tech sector about 5-6 months after launching.

This dip likely corresponds to the expiry of an IPO’s lock-up period, the window during which a company’s pre-IPO investors are barred from selling their stock. Once the lock-up expires, early investors cashing in on strong initial performance flood the market with shares and push prices down.

Interestingly, most gains for these IPOs tend to come on the first day of trading. The median first-day performance for tech IPOs was a 21% gain over the offer price. That front-loading is why the median first-year return for a tech IPO, excluding the first day of trading, is -19% relative to the broader tech sector.
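To make the arithmetic concrete, here is a minimal sketch, with purely hypothetical prices, of how a first-year return splits into the day-one pop and the return earned after the first close; it mirrors the idea behind the figures above rather than reproducing the Janus Henderson methodology.

```python
# Minimal sketch: decomposing an IPO's first-year return into the day-one
# "pop" and the return earned after the first close. Prices are hypothetical,
# chosen only to illustrate the arithmetic.

offer_price = 20.00      # price paid by investors allocated shares in the IPO
first_close = 24.20      # first-day close, a 21% pop like the median cited above
one_year_price = 22.00   # hypothetical price one year after listing

pop = first_close / offer_price - 1               # day-one gain: +21.0%
after_day_one = one_year_price / first_close - 1  # what a day-two buyer earns: -9.1%
from_offer = one_year_price / offer_price - 1     # what an IPO allocant earns: +10.0%

print(f"First-day pop:        {pop:+.1%}")
print(f"Return after day one: {after_day_one:+.1%}")
print(f"Return from offer:    {from_offer:+.1%}")
```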

How to Make Money from Tech IPOs

So does that mean that investors should avoid tech IPOs? Not necessarily.

Longer-term analysis from the University of Florida’s Warrington College of Business shows that U.S. tech IPOs offer better returns than other sectors as long as investors get in at the offer price.

U.S. Tech IPO Returns from Offer Price

Sector | Avg. Three-Year Return | Market-Adjusted Return
Tech | 77.0% | 28.3%
Non-Tech | 34.6% | -11.4%

Even after adjusting for broader market performance, tech IPOs have delivered solid returns when measured from the offer price.

The challenge is that investors who buy the stock after that first-day market bump may have already missed out on meaningful gains:

U.S. Tech IPO Returns from First Closing Price

Sector | Avg. Three-Year Return | Market-Adjusted Return
Tech | 46.1% | -2.7%
Non-Tech | 23.7% | -22.2%
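As a rough illustration of why the baseline matters, here is a minimal sketch, with hypothetical numbers, of the same three-year holding measured from the offer price versus the first closing price, with a simple market adjustment (IPO return minus a benchmark's return); the study's actual adjustment method may differ.

```python
# Minimal sketch: one holding measured from two baselines. The market
# adjustment here is simply the IPO's return minus a benchmark's return over
# the same window, which is a simplification; all numbers are hypothetical.

offer_price = 20.00
first_close = 24.20             # price after the first-day bump
price_after_three_years = 30.00
benchmark_return_3y = 0.40      # hypothetical broad-market return over three years

from_offer = price_after_three_years / offer_price - 1        # +50.0%
from_first_close = price_after_three_years / first_close - 1  # +24.0%

print(f"From offer price: {from_offer:+.1%} raw, "
      f"{from_offer - benchmark_return_3y:+.1%} market-adjusted")
print(f"From first close: {from_first_close:+.1%} raw, "
      f"{from_first_close - benchmark_return_3y:+.1%} market-adjusted")
```

The same hypothetical stock looks like a market-beater from the offer price and a laggard from the first close, which is the gap the two tables above capture.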

So should investors shy away from tech IPOs unless they’re able to get in early?

Generally speaking, the analysis holds that new tech companies perform relatively well, but not better than the broader market once they’ve started trading.

However, in a world of billion-dollar unicorns, there are always exceptions to the rule. The University of Florida study found that tech companies with a base of over $100 million in sales before going public saw a market-adjusted three-year return of 24.4% from the first closing price.

If you can sift through the hype and properly analyze the right tech IPO to support, the reality can be rewarding.



Charted: The Exponential Growth in AI Computation

In eight decades, artificial intelligence has moved from the purview of science fiction to reality. Here’s a quick history of AI computation.


Chart: machine learning systems by year of creation, plotted against the amount of computation used to train them, measured in FLOPs.


Electronic computers emerged in the 1940s and had barely been around for a decade before experiments with AI began. Now we have AI models that can write poetry and generate images from textual prompts. But what’s led to such exponential growth in such a short time?

This chart from Our World in Data tracks the history of AI through the amount of computation power used to train an AI model, using data from Epoch AI.

The Three Eras of AI Computation

In the 1950s, American mathematician Claude Shannon trained a robotic mouse called Theseus to navigate a maze and remember its course—the first apparent artificial learning of any kind.

Theseus was built on just 40 floating point operations (FLOPs). A FLOP counts one basic arithmetic operation (addition, subtraction, multiplication, or division) performed on floating-point numbers, so the total number of FLOPs used to train a model is a measure of how much computation went into it.

ℹ️ The related rate, FLOPS (floating point operations per second), is often used to measure the computational performance of hardware. Here, FLOPs count the total operations used to train a model: the higher the count, the more computation went into it.
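To show how quickly these counts accumulate, here is a minimal sketch of FLOP counting for a single dense neural-network layer, assuming the common convention that each multiply and each add counts as one operation; real accounting schemes vary.

```python
# Minimal sketch: counting FLOPs for one forward pass of a dense layer
# y = W @ x + b, with W of shape (out_features, in_features). Assumes one
# FLOP per multiply and one per add.

def dense_layer_flops(in_features: int, out_features: int) -> int:
    multiplies = in_features * out_features
    adds = in_features * out_features   # accumulations plus the bias, roughly
    return multiplies + adds            # ~2 * in_features * out_features

# Even a small 1,000 -> 100 layer needs ~200,000 FLOPs per forward pass,
# dwarfing the ~40 FLOPs used to train Theseus.
print(dense_layer_flops(1000, 100))  # 200000
```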

Computation power, the availability of training data, and algorithms are the three main ingredients of AI progress. And for the first few decades of AI advances, compute, the computational power needed to train an AI model, grew according to Moore’s Law.

Period | Era | Compute Doubling
1950–2010 | Pre-Deep Learning | 18–24 months
2010–2016 | Deep Learning | 5–7 months
2016–2022 | Large-scale models | 11 months

Source: “Compute Trends Across Three Eras of Machine Learning” by Sevilla et al., 2022.
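To get a feel for what these doubling times mean, here is a minimal sketch of how compute growth compounds at each era's pace; the ten-year horizon is illustrative, not taken from the paper.

```python
# Minimal sketch: compounding growth implied by a given doubling time.
# Growth over a period is 2 ** (elapsed_months / doubling_months).

def growth_factor(years: float, doubling_months: float) -> float:
    return 2 ** (years * 12 / doubling_months)

# Ten years at the pre-deep-learning pace (~21 months per doubling)
# versus the deep-learning pace (~6 months per doubling):
print(f"{growth_factor(10, 21):,.0f}x")  # roughly 52x
print(f"{growth_factor(10, 6):,.0f}x")   # 1,048,576x
```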

However, at the start of the Deep Learning Era, heralded by AlexNet (an image recognition AI) in 2012, that doubling timeframe shortened considerably to six months, as researchers invested more in computation and processors.

With the emergence of AlphaGo in 2015—a computer program that beat a human professional Go player—researchers have identified a third era: that of the large-scale AI models whose computation needs dwarf all previous AI systems.

Predicting AI Computation Progress

Looking back at just the last decade, compute has grown so tremendously that it’s difficult to comprehend.

For example, the compute used to train Minerva, an AI that can solve complex math problems, is nearly 6 million times that used to train AlexNet 10 years earlier.

Here’s a list of important AI models through history and the amount of compute used to train them.

AI | Year | FLOPs
Theseus | 1950 | 40
Perceptron Mark I | 1957–58 | 695,000
Neocognitron | 1980 | 228 million
NetTalk | 1987 | 81 billion
TD-Gammon | 1992 | 18 trillion
NPLM | 2003 | 1.1 petaFLOPs
AlexNet | 2012 | 470 petaFLOPs
AlphaGo | 2016 | 1.9 million petaFLOPs
GPT-3 | 2020 | 314 million petaFLOPs
Minerva | 2022 | 2.7 billion petaFLOPs

Note: One petaFLOP = one quadrillion FLOPs. Source: “Compute Trends Across Three Eras of Machine Learning” by Sevilla et al., 2022.
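Since the table mixes raw FLOPs and petaFLOPs, a small sketch in consistent units makes the comparisons easier and checks the “nearly 6 million times” figure above; the entries are taken directly from the table.

```python
# Minimal sketch: converting the table above into plain FLOPs
# (1 petaFLOP = 1e15 FLOPs) so models can be compared directly.

PETA = 1e15
training_flops = {
    "Theseus (1950)": 40,
    "AlexNet (2012)": 470 * PETA,
    "GPT-3 (2020)":   314e6 * PETA,
    "Minerva (2022)": 2.7e9 * PETA,
}

ratio = training_flops["Minerva (2022)"] / training_flops["AlexNet (2012)"]
print(f"Minerva vs. AlexNet: ~{ratio:,.0f}x")  # ~5,744,681x, i.e. nearly 6 million times
```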

This growth in computation, along with the availability of massive data sets and better algorithms, has yielded a lot of AI progress in seemingly very little time. AI now doesn’t just match but beats human performance in many areas.

It’s difficult to say whether the same pace of computation growth will be maintained. Large-scale models require increasingly more compute power to train, and if computation doesn’t continue to ramp up, progress could slow. Exhausting the data currently available for training AI models could also impede the development and implementation of new models.

However, with all the funding poured into AI recently, perhaps more breakthroughs are around the corner, like matching the computation power of the human brain.

Where Does This Data Come From?

Source: “Compute Trends Across Three Eras of Machine Learning” by Sevilla et al., 2022.

Note: The estimated time for computation to double varies across research efforts, including Amodei and Hernandez (2018) and Lyzhov (2021). This article is based on our source’s findings; please see the full paper for further details. The authors also acknowledge framing concerns around deeming an AI model “regular-sized” or “large-sized” and note that further research is needed in this area.

Methodology: The authors of the paper used two methods to determine the amount of compute used to train AI models: counting the number of operations and tracking GPU time. Both approaches have drawbacks, namely a lack of transparency around training processes and growing complexity as ML models scale.
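As a rough illustration of those two routes, here is a minimal sketch using common rules of thumb rather than the paper's exact procedures: an operation-count estimate based on the widely used heuristic of roughly 6 FLOPs per parameter per training token for dense transformers, and a hardware-side estimate from GPU count, training time, peak throughput, and an assumed utilization.

```python
# Minimal sketch of the two estimation routes, using rough heuristics.

def flops_from_operation_count(parameters: float, training_tokens: float) -> float:
    """Operation counting: ~6 FLOPs per parameter per training token (dense-transformer heuristic)."""
    return 6 * parameters * training_tokens

def flops_from_gpu_time(num_gpus: int, days: float,
                        peak_flops_per_gpu: float, utilization: float) -> float:
    """GPU-time tracking: peak hardware throughput scaled by achieved utilization."""
    return num_gpus * days * 86_400 * peak_flops_per_gpu * utilization

# GPT-3's published scale (175B parameters, ~300B training tokens) lands near the
# 314 million petaFLOPs (3.14e23 FLOPs) listed in the table above:
print(f"{flops_from_operation_count(175e9, 300e9):.2e}")      # 3.15e+23

# A hypothetical cluster: 1,000 GPUs, 120 days, 100 TFLOPS peak, 30% utilization:
print(f"{flops_from_gpu_time(1000, 120, 100e12, 0.30):.2e}")  # 3.11e+23
```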
