
Summarizing the Critical Components of the AI Supply Chain

Everyone will play a role in the evolution of AI; be sure to understand the basics.

The ‘AI supply chain’ that links raw information to valuable output includes a handful of components, each critical to the execution and delivery of synthetic insight, what is today known as Artificial Intelligence (AI).  Without each of these components, the processes that create AI break down, resulting in one of two outcomes: (1) degraded and unreliable outputs, or (2) a total inability to create or deliver the outputs that constitute AI.

Their significance raises several initial questions: What are they?  Why and how are they so significant?  What are their vulnerabilities?

 

Data

Raw data is the lifeblood of an AI system.  It comprises the many bits of information that, once collected and circulated, supply AI models with the material necessary to produce analytical insight and, ultimately, a functioning AI tool.

And like blood, the quality of raw data affects how well the model(s) it supplies function.  Errors, gaps, inconsistently applied capture criteria, and inconsistent syntax are just a few of the many ‘toxins’ that can reduce the quality of raw data and degrade AI model performance.  High-quality data sets are accurate, complete, and practically comprehensive.
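As an illustration, the kinds of ‘toxins’ listed above can be screened for programmatically before a data set ever reaches a model.  The sketch below is a minimal, hypothetical example; the field names (`age`, `country`), the checks, and the plausibility range are illustrative assumptions, not a standard.

```python
# Hypothetical record-level quality checks for a raw data set.  The fields
# and thresholds are illustrative assumptions chosen for this sketch.
def quality_report(records, required_fields=("age", "country")):
    issues = {"missing_field": 0, "out_of_range": 0}
    for rec in records:
        for field in required_fields:
            if rec.get(field) is None:          # gaps / incomplete capture
                issues["missing_field"] += 1
        age = rec.get("age")
        if isinstance(age, (int, float)) and not (0 <= age <= 120):
            issues["out_of_range"] += 1         # errors / implausible values
    clean = sum(issues.values()) == 0
    return issues, clean

sample = [{"age": 34, "country": "US"},
          {"age": -5, "country": "US"},         # error: implausible value
          {"age": 41, "country": None}]         # gap: missing field
issues, clean = quality_report(sample)
```

A real pipeline would add checks for the other ‘toxins’ (duplicate records, inconsistent capture criteria across sources), but the principle is the same: measure quality before training, not after the model misbehaves.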

 

Models

AI models are the underlying mathematical algorithms that receive and ‘consume’ raw data.  After a model is initially written, it undergoes a ‘training’ process to establish its capability and accuracy.  During this process, the model uses the raw data to generate predictions and test their accuracy.  The model is then fed more data, and the cycle repeats until the desired accuracy level has been achieved.
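The train-evaluate-retrain cycle described above can be sketched with a deliberately tiny ‘model’.  Here a single weight fitted to a noisy linear relationship stands in for a real AI model; the learning rate, batch size, and accuracy target are all illustrative assumptions.

```python
import random

random.seed(0)

# Toy "model": learn the slope w in y = 3x from noisy samples.
def generate_batch(n):
    return [(x, 3.0 * x + random.uniform(-0.1, 0.1))
            for x in (random.uniform(0, 1) for _ in range(n))]

def train_until(target_error, batch_size=100, lr=0.1, max_rounds=100):
    w = 0.0
    for rounds in range(1, max_rounds + 1):
        for x, y in generate_batch(batch_size):  # feed the model more data
            w += lr * (y - w * x) * x            # gradient step on squared error
        # evaluate: mean absolute prediction error on a fresh batch
        test = generate_batch(batch_size)
        err = sum(abs(y - w * x) for x, y in test) / batch_size
        if err <= target_error:                  # desired accuracy reached
            return w, err, rounds
    return w, err, max_rounds

w, err, rounds = train_until(target_error=0.06)
print(w, err, rounds)
```

The noise in the data puts a floor under the achievable error, which is why the loop targets a tolerance rather than perfection; real training runs stop for the same reason.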

The vulnerabilities associated with a given AI model vary to some degree.  Some, such as security threats and the availability of technical support for the model’s underlying technology stack, are largely common across models.  Others are more model-specific: data quality tolerance, resource demand efficiency, and adaptability are all examples of model-specific variables that affect overall vulnerability.

 

Computing Power

Processing power is arguably the most significant factor shaping the rate and extent of AI capability development today.  As AI models become more sophisticated, the demand for processing power capable of handling their algorithmic and data-volume complexity obviously increases as well.  Less obvious is how future demand will be distributed across the various subsets of processors, a question central to conversations about both chip supply and the overall pace of AI capability development.

Central Processing Units (CPUs) are typically used for general, ‘single channel’ processing applications.  Graphics Processing Units (GPUs) are optimized for ‘multi-channel’ processes that require the execution of multiple threads of calculations in parallel.  Field Programmable Gate Arrays (FPGAs) can, as the name implies, be configured post-production by the user to optimize their performance for a specific use case.  An Application Specific Integrated Circuit (ASIC) is likewise optimized for a specific application, but through a customization process that takes place during initial manufacturing (unlike an FPGA).  These are just several of the many types of processors collectively being used to execute AI models.

From a manufacturing capacity distribution standpoint, the global market is relatively straightforward: South Korea 28%, Taiwan 22%, Japan 16%, China 12%, North America 11%, Europe 3%, rest of world 7%[1].  When forecasted demand is paired with supply expectations, however, it becomes clear that there is a significant production shortfall that risks stunting future growth and development[2].

 

Workforce

Appropriately, if also ironically, the human know-how behind the development and delivery of AI capabilities sits at the core of the entire endeavor (for now, at least).

Computer and information research scientists, including artificial intelligence specialists, invent and design new approaches to computing technology and find innovative uses for existing technology.  Semiconductor engineers ensure that semiconductors can be designed and manufactured in a manner that optimizes them for AI applications.  Software engineers write the code and formulas that embody AI; they are responsible for leveraging the hardware and scientific principles created by their peers to derive ‘tangible’ capability, and for iterating until those capabilities have been optimized relative to current hardware and theoretical limitations.

There is also a layer of the AI workforce that will emerge more clearly as adoption of AI tools accelerates, but that for now may best be called ‘Integrators’.  This will likely include those tasked with matching use cases to models, those utilizing or overseeing the operation of an AI tool, those responsible for ensuring the end user has appropriate processing and hardware capabilities, and those responsible for securing the tool and any information systems (IS) it may be connected to.  Across every U.S. sector for which there is data (except agriculture, forestry, fishing, and hunting), the share of AI-related job postings increased on average from 1.7% in 2021 to 1.9% in 2022[3].  That growth may be subtle on paper, but the takeaway remains: employers in the United States are increasingly looking for workers with AI-related skills.

The dynamics that influence the supply and proficiency of the AI workforce are not unique to the AI supply chain.  Investment in the academic infrastructure that cultivates proficient young professionals (i.e., STEM education followed by more specific programs of study aligned with each discipline) is a key part of protecting and growing the AI workforce.  The regionality of these job markets shouldn’t change significantly: roles aligned with manufacturing will remain ‘local’ opportunities (albeit competing for talent on a national or even international basis), while roles aligned with integration, for example, will be less local, given the travel and limited-duration nature of that work (while likewise competing for talent nationally and internationally).

The relationship between microprocessor manufacturing investment and migration, on the one hand, and growth in the labor roles aligned to those activities, on the other, is obvious, and it can help shape the landscape of AI supply chain activity, locations, and needs across the board.

 

Capital

Despite the significant attention AI is receiving of late, global private investment in AI declined in 2022 for the first time in more than a decade, and somewhat dramatically: $91.9B in 2022, down 26.7% from 2021[4].  The number of funding events and the number of newly funded AI companies also fell in 2022.  For context, total U.S. private equity investment across all sectors was down 30% over the same period, a decline largely attributed to a combination of macroeconomic turbulence, challenging debt markets, and global geopolitical uncertainty[5].
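As a back-of-envelope check on those figures, the 2021 baseline implied by a $91.9B 2022 total and a 26.7% year-over-year decline can be backed out directly:

```python
# Sanity-check the cited figures: $91.9B in 2022 after a 26.7% decline
# implies a 2021 baseline of roughly $125.4B.
inv_2022 = 91.9                      # global private AI investment, 2022 ($B)
decline = 0.267                      # year-over-year decline from 2021
inv_2021 = inv_2022 / (1 - decline)  # implied 2021 total ($B)
print(round(inv_2021, 1))
```

Even after the pullback, in other words, 2022 private investment remained within roughly a quarter of its all-time peak.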

Time will tell what impact this pullback in investment has on the arrival of anticipated capabilities.  There is certainly the potential for a reshuffling of the leaderboard, at least in terms of how close competitors sit to one another from a capability perspective.

To some extent it also appears that investors continue to characterize the promise of AI solutions as ‘speculative’.  With operational efficiency being one of the most frequently touted benefits of AI, its potential as a cost reduction and margin growth tool would seemingly be even more valuable in the current economic environment.

 

Industry’s Evolving View on Role of Domain Collaboration

It is rare, though not unheard of, for players in an emerging technology market to freely share their core investment work product with competitors, industry, or community.  Protecting the ‘first mover advantage’ for as long as possible is generally one of the most emphasized aspects of a successful start-up plan.  Since its inception, however, the AI community has rejected that convention, freely sharing code libraries, data sets, internally conducted research, and even pre-trained models, all available for download through various online technology collaboration ‘hub’ sites and forums.

This approach has been beneficial in a variety of ways.  It fostered and accelerated the development of a broad base of subject matter experts (SMEs).  It improved quality control and technical problem solving by leveraging scrutiny at scale.  It enabled and sustained a low barrier to entry for the field of AI generally.

Increasingly, though, it appears that leaders in AI thought and capability development may be reversing course.  For example, OpenAI, creator of the popular ‘GPT-#’ AI tools, initially released GPT-2 as open source but has kept subsequent versions (GPT-3 and GPT-4) closed.

Frustrating though it may be, the change in approach may not be uncalled for.  As the capabilities of AI have grown rapidly in just the last few years, so too have the risks, and the perceived risks, associated with large-scale use of these now very powerful tools[6].  Industry leaders cite these concerns as the primary basis for taking a more restrictive, closed approach to AI domain collaboration, and say that until the potential consequences are better understood and regulatory frameworks developed, they will trend toward what they view as a safer, more conservative approach in this area.

 

[1] Senate Republican Policy Committee, “Semiconductors: Key to Economic and National Security,” April 29, 2021.

[2] Seetharaman and Dotan, “The AI Boom Runs on Chips, But It Can’t Get Enough,” The Wall Street Journal, May 29, 2023.

[3] Maslej et al., “The AI Index 2023 Annual Report,” Stanford Institute for Human-Centered AI, April 2023.

[4] Maslej et al., “The AI Index 2023 Annual Report,” Stanford Institute for Human-Centered AI, April 2023.

[5] PwC, “Transact to Transform: 2023 M&A Integration Survey,” April 2023.

[6] Chavez, “An AI Challenge: Balancing Open and Closed Systems,” Center for European Policy Analysis, May 30, 2023.
