Foundation model

A foundation model is a machine learning or deep learning model that is trained on broad data such that it can be applied across a wide range of use cases.[1] Foundation models have transformed artificial intelligence (AI), powering prominent generative AI applications like ChatGPT.[1] The Stanford Institute for Human-Centered Artificial Intelligence's (HAI) Center for Research on Foundation Models (CRFM) created and popularized the term.[2]

Foundation models are general-purpose technologies that can support a diverse range of use cases. Building foundation models is often highly resource-intensive, with the most expensive models costing hundreds of millions of dollars to pay for the underlying data and compute required.[3] In contrast, adapting an existing foundation model for a specific use case or using it directly is much less expensive.

Early examples of foundation models were language models (LMs) like Google's BERT[4] and OpenAI's "GPT-n" series. Beyond text, foundation models have been developed across a range of modalities, including DALL-E and Flamingo[5] for images, MusicGen[6] for music, and RT-2[7] for robotic control. Foundation models constitute a broad shift in AI development: they are being built for astronomy,[8] radiology,[9] genomics,[10] music,[11] coding,[12] time-series forecasting,[13] and mathematics.[14]

Definitions

The Stanford Institute for Human-Centered Artificial Intelligence's (HAI) Center for Research on Foundation Models (CRFM) coined the term "foundation model" in August 2021 to mean "any model that is trained on broad data (generally using self-supervision at scale) that can be adapted (e.g., fine-tuned) to a wide range of downstream tasks".[15] This was based on their observation that preexisting terms, while overlapping, were not adequate, stating that "'(large) language model' was too narrow given [the] focus is not only language; 'self-supervised model' was too specific to the training objective; and 'pretrained model' suggested that the noteworthy action all happened after 'pretraining'."[16] The term “foundation model” was chosen over “foundational model”[17] because “foundational” implies that these models provide fundamental principles in a way that “foundation” does not.[18] After considering many terms, they settled on "foundation model" to emphasize the intended function (i.e., amenability to subsequent further development) rather than modality, architecture, or implementation.

As governments regulate foundation models, new legal definitions have emerged.

  • In the United States, the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence defines a foundation model as “an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts”.[19]
  • In the United States, the proposed AI Foundation Model Transparency Act of 2023[20] by House Representatives Don Beyer (D, VA) and Anna Eshoo (D, CA) defines a foundation model as “an artificial intelligence model trained on broad data, generally uses self supervision, generally contains at least 1,000,000,000 parameters, is applicable across a wide range of contexts, and exhibits, or could be easily modified to exhibit, high levels of performance at tasks that could pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters.”
  • In the European Union, the European Parliament’s negotiated position on the E.U. AI Act defines a foundation model as an “AI model that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks”.
  • In the United Kingdom, the Competition and Markets Authority’s AI Foundation Models: Initial Report[1] defines a foundation model as “a type of AI technology that are trained on vast amounts of data that can be adapted to a wide range of tasks and operations.”

Overall, while many of these definitions stay close to the original Stanford definition, they do introduce some subtle distinctions. For example, the U.S. definitions are the only ones to reference the size of a foundation model, and they differ on the exact magnitude. Beyer and Eshoo's definition also specifies that foundation models must achieve a level of performance sufficient to pose a potential danger. In contrast, the E.U. definition includes whether the model is designed for generality of output. Nonetheless, all definitions share the requirement that foundation models be trained on a broad range of data with potential applications in many domains.

History

Technologically, foundation models are built using established machine learning techniques like deep neural networks, transfer learning, and self-supervised learning. Foundation models are noteworthy given the unprecedented resource investment, model and data size, and ultimately their scope of application when compared to previous forms of AI. The rise of foundation models constitutes a new paradigm in AI, where general-purpose models function as a reusable infrastructure, instead of bespoke and one-off task-specific models.

Foundation models draw upon a series of advances in the history of AI. These models can be situated against the backdrop of the broader rise of machine learning since the 1990s. Earlier AI systems depended on task-specific instructions to solve a given problem, whereas machine learning-powered models could learn how to perform a task given sufficient data. This shift from so-called expert systems to data-driven machine learning was the first step towards the modern foundation model.

The next major step was the advent of deep learning circa 2010.[21] With larger datasets and more advanced neural networks, AI models were able to achieve higher levels of performance. The first prominent instance was AlexNet, a deep neural network architecture that won the 2012 ImageNet Large Scale Visual Recognition Challenge and demonstrated that deep learning could deliver strong performance on a large-scale general dataset. Alongside this methodological shift to end-to-end optimization of deep neural networks, the 2010s were also marked by a software shift: in the mid-2010s, the rise of deep learning frameworks like PyTorch and TensorFlow provided crucial infrastructure for simplifying and scaling deep learning pipelines.

Foundation models began to materialize as the latest wave of deep learning models in the late 2010s with models like ELMo, GPT, BERT, and GPT-2.[21] Relative to most prior work on deep learning, these language models demonstrated the potential of training on much larger web-sourced datasets using self-supervised objectives (e.g. predicting the next word in a large corpus of text). These approaches, which draw upon earlier works like word2vec and GloVe, deviated from prior supervised approaches that required annotated data (e.g. crowd-sourced labels).

Overall, the computational advances in specialized hardware and parallelism (e.g., large clusters of NVIDIA GPUs), new developments in neural network architecture (e.g., the Transformer), and the increased use of training data with minimal supervision all contributed to the rise of foundation models. Notable foundation models include GPT, BERT, GPT-2, T5, GPT-3, CLIP, DALL-E, Stable Diffusion, GPT-4, LLaMA, LLaMA 2, and Mistral, each of which brought its own distinct abilities, particularly strong generative capabilities.

The year 2022 was particularly influential in the history of foundation models. The releases of Stable Diffusion and ChatGPT (initially powered by the GPT-3.5 model) led to foundation models and generative AI entering widespread public discourse. Further, the releases of LLaMA, Llama 2, and Mistral in 2023 placed greater emphasis on how foundation models are released, with open foundation models garnering both support[22] and scrutiny.[23]

Related concepts

Frontier models

Certain highly advanced foundation models are termed “frontier models,” which have the potential to “possess dangerous capabilities sufficient to pose severe risks to public safety.”[24] These “dangerous capabilities” stem from the accidental or intentional misuse of such models, which in conjunction with their powerful nature can lead to severe harms. As foundation models continue to improve, some AI researchers speculate that almost all next-generation foundation models will be considered frontier models.

Since the concept of dangerous capabilities is inherently subjective, there is no strict designation for which foundation models qualify as frontier models. However, some generally held ideas for sufficiently dangerous capabilities include:

  • Designing and synthesizing new biological or chemical weapons[25]
  • Producing and propagating convincing, tailored disinformation with minimal user instruction[26]
  • Harnessing unprecedented offensive cyber capabilities[27]
  • Evading human control through deceptive means[28]

Due to frontier models’ unique capabilities, it is difficult to effectively regulate their development and deployment. Because of their emergent nature, new dangerous capabilities can appear on their own in frontier models, both in the development stage and after being deployed.[24] Additionally, since frontier models continue to adapt after deployment, it remains difficult to mitigate all harms that arise from already-deployed models. If a frontier model happens to be open-source or is released online, the model can also disseminate rapidly, further hampering regulators by creating a lack of accountability.

General-purpose AI

Due to their adaptability to a wide range of use-cases, foundation models are sometimes considered to be examples of general-purpose AI. In designing the EU AI Act, the European Parliament has stated that a new wave of general-purpose AI technologies shapes the overall AI ecosystem.[29] The fuller structure of the ecosystem, in addition to the properties of specific general-purpose AI systems, influences the design of AI policy and research.[30] General-purpose AI systems also often appear in people’s everyday lives through applications and tools like ChatGPT or DALL-E.

Government agencies like the European Parliament have identified the regulation of general-purpose AI, such as foundation models, as a high priority. General-purpose AI systems are often characterized by large size, opacity, and potential for emergence, all of which can create unintended harms. Such systems also heavily influence downstream applications, which further exacerbates the need for regulation. With regard to prominent legislation, a number of stakeholders have pushed for the EU AI Act to include restrictions on general-purpose AI systems, all of which would also apply to foundation models.

Technical details

Modeling

For a foundation model to effectively generalize, it must acquire rich representations of the training data. As a result, expressive model architectures that efficiently process large-scale data are often preferred in building foundation models.[15] Currently, the Transformer architecture is the de facto choice for building foundation models across a range of modalities.[31]
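
As a minimal illustration of this architectural choice, the sketch below builds a small Transformer encoder from PyTorch's standard modules; the hyperparameters (vocabulary size, model width, number of heads and layers) are arbitrary placeholders and do not correspond to any particular foundation model.

```python
# A minimal, illustrative Transformer sketch; all sizes are placeholders.
import torch
import torch.nn as nn

vocab_size, d_model, n_heads, n_layers, seq_len = 32_000, 512, 8, 6, 128

embedding = nn.Embedding(vocab_size, d_model)
encoder_layer = nn.TransformerEncoderLayer(
    d_model=d_model, nhead=n_heads, dim_feedforward=4 * d_model, batch_first=True
)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
lm_head = nn.Linear(d_model, vocab_size)  # maps representations back to token logits

tokens = torch.randint(0, vocab_size, (2, seq_len))  # a dummy batch of token ids
hidden = encoder(embedding(tokens))                  # contextual representations
logits = lm_head(hidden)                             # shape: (batch, seq_len, vocab)
```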

Training

Foundation models are built by optimizing one or more training objectives, mathematical functions that determine how model parameters are updated based on the model's predictions on training data.[32] Language models are often trained with a next-token prediction objective, which measures how well the model predicts the next token in a sequence. Image models are commonly trained with contrastive learning or diffusion training objectives. In contrastive learning, images are randomly augmented and the model is evaluated on the similarity of the resulting representations. In diffusion models, images are progressively noised and the model learns to gradually de-noise them. Multimodal training objectives also exist; some separate images and text during training, while others examine them concurrently.[33] In general, the training objectives for foundation models promote the learning of broadly useful representations of data.
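
As an illustration of the most common language-model objective, the following is a minimal sketch of next-token prediction implemented as a cross-entropy loss in PyTorch; the random logits and tokens stand in for a real model and corpus.

```python
# A minimal sketch of a next-token prediction objective, assuming a model that
# returns logits of shape (batch, seq_len, vocab_size) for a batch of token ids.
import torch
import torch.nn.functional as F

def next_token_loss(logits: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
    # Predict token t+1 from positions up to t: drop the last prediction and
    # the first target so the two sequences line up.
    pred = logits[:, :-1, :]   # (batch, seq_len - 1, vocab)
    target = tokens[:, 1:]     # (batch, seq_len - 1)
    return F.cross_entropy(pred.reshape(-1, pred.size(-1)), target.reshape(-1))

# Example with random data standing in for a real model and corpus:
batch, seq_len, vocab = 4, 16, 1000
logits = torch.randn(batch, seq_len, vocab, requires_grad=True)
tokens = torch.randint(0, vocab, (batch, seq_len))
loss = next_token_loss(logits, tokens)
loss.backward()  # gradients of this objective drive the parameter updates
```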

With the rise of foundation models and the larger datasets that power them, a training objective must be able to sift through internet-scale data for meaningful data points. Additionally, since foundation models are designed to solve a general range of tasks, training objectives ought to be domain complete, meaning they support a broad set of downstream capabilities within the given domain. Lastly, foundation model training objectives should scale well and be computationally efficient. With model size and compute power both being relevant constraints, a training objective must be able to overcome such bottlenecks.

Data

Foundation models are trained on a large quantity of data, working under the maxim “the more data, the better.”[34] Performance evaluation does show that more data generally leads to better performance, but other issues arise as data quantity grows. Tasks like managing the dataset, integrating data across new applications, ensuring adherence to data licenses, and maintaining data quality all become more difficult as data size grows. The specific demands of foundation models have only exacerbated such issues, as it remains the norm for large foundation models to use public web-scraped data. Public web data remains a plentiful resource, but it also demands stringent moderation and data processing from foundation model developers before it can be successfully integrated into the training pipeline.[35]

Training foundation models often runs the risk of violating user privacy, as private data can be disclosed, collected, or used in ways beyond the stated scope. Even if no private data is leaked, models can still inadvertently compromise security through learned behavior in the resulting foundation model.[36] Data quality is another key point, as web-scraped data frequently contains biased, duplicate, and toxic material. Once foundation models are deployed, ensuring high-quality data is still an issue, as undesirable behavior can still emerge from small subsets of data.

Systems

The size of foundation models also brings about issues with the computer systems they run on. The average foundation model is too large to be run within a single accelerator's memory, and the initial training process requires an expensive amount of resources.[37] These issues are expected to worsen as foundation models grow even larger. Due to this constraint, researchers have begun investigating ways to compress models for more efficient inference.

GPUs are the most common choice of compute hardware for machine learning, due to their large memory and high throughput. Typical foundation model training requires many GPUs connected in parallel with fast interconnects. Acquiring enough sufficiently capable GPUs is a challenge for many foundation model developers, and it has produced a dilemma in the field: larger models require more compute power, but often at the cost of compute efficiency. Since training remains time-consuming and expensive, this tradeoff means that only a few select companies can afford the production costs of large, state-of-the-art foundation models. Some techniques like compression and distillation can make inference more affordable, but they do not fully remove this constraint.
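
As a concrete illustration of parallel training, the sketch below shows its simplest form, data parallelism with PyTorch's DistributedDataParallel. The tiny linear model and toy loss are placeholders, the script is assumed to be launched with torchrun (which sets the environment variables the process group reads), and the pipeline and tensor parallelism used for the largest models are omitted.

```python
# A minimal data-parallel training sketch; launch with:
#   torchrun --nproc_per_node=<num_gpus> this_script.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")        # one process per GPU
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda(local_rank)  # stand-in for a large model
    model = DDP(model, device_ids=[local_rank])            # gradients sync across GPUs

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    for _ in range(10):                                     # toy training loop
        x = torch.randn(32, 1024, device=f"cuda:{local_rank}")
        loss = model(x).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()                                     # all-reduce happens here
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```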

Scaling

The accuracy and capabilities of foundation models often scale predictably with the size of the model and the amount of training data. Specifically, scaling laws have been discovered, which are data-driven empirical trends that relate resources (data, model size, compute usage) to model capabilities. A model's scale is typically characterized by compute, dataset size, and the number of parameters, each of which exhibits a power-law relationship with end performance.
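
An illustrative form of such a power law, following the language-model scaling laws of Kaplan et al.,[34] relates test loss to parameter count and dataset size; the constants are empirical fits to observed training runs, not fundamental quantities.

```latex
% Illustrative power-law scaling laws (after Kaplan et al.[34]);
% L is test loss, N the number of parameters, D the dataset size, and
% N_c, D_c, \alpha_N, \alpha_D are empirically fitted constants.
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N},
\qquad
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}
```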

However, broken scaling laws[38] have been discovered in which this relationship smoothly transitions (at points referred to as break(s)) from a power law with one exponent to a power law with another (different) exponent. When one does not collect any points near (or after) the break(s), it can be difficult to obtain an accurate extrapolation.

Adaptation

Foundation models are inherently multi-purpose: using these models for a specific use case requires some form of adaptation. At a minimum, models need to be adapted to perform the task of interest (task specification), but better performance can often be achieved by more extensive adaptation to the domain of interest (domain specialization).

A variety of methods (e.g. prompting, in-context learning, fine-tuning, LoRA) provide different tradeoffs between the costs of adaptation and the extent to which models are specialized. Some major facets to consider when adapting a foundation model are compute budget and data availability. Foundation models can be very large, up to trillions of parameters in size, so adapting the entirety of a foundation model can be computationally expensive. Therefore, developers sometimes adapt only the last neural layer or only the bias vectors to save time and space.[39] For particularly niche applications, specific data may also not be available to adapt the foundation model sufficiently. In such circumstances, data must be manually labeled, which is costly and can demand expert knowledge.
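
The sketch below illustrates the kind of lightweight adaptation described above, freezing a pretrained network and updating only bias vectors (in the spirit of BitFit[39]) and the final layer; the small stand-in network and the hypothetical two-class task are assumptions made for brevity.

```python
# A minimal parameter-efficient adaptation sketch: freeze a pretrained model,
# then update only bias terms and the final (task) layer.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(768, 768), nn.ReLU(),
    nn.Linear(768, 768), nn.ReLU(),
    nn.Linear(768, 2),            # task head for a hypothetical 2-class task
)

# Freeze everything, then unfreeze only bias vectors and the final layer.
for param in model.parameters():
    param.requires_grad = False
for name, param in model.named_parameters():
    if name.endswith(".bias"):
        param.requires_grad = True
for param in model[-1].parameters():
    param.requires_grad = True

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-3)

# One toy adaptation step on labeled task data (random here).
x, y = torch.randn(8, 768), torch.randint(0, 2, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()
```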

Evaluation

Evaluation is a key part of developing foundation models. Not only does evaluation allow for tracking the progress of high-performance models, it also creates benchmarks for future model development. Stakeholders rely on evaluations to understand model behaviors and gain insight into their various attributes. Traditionally, foundation models are evaluated relative to each other through standardized task benchmarks like MMLU,[40] MMMU,[41] HumanEval,[42] and GSM8K.[43] Given that foundation models are multi-purpose, meta-benchmarks that aggregate different underlying benchmarks are increasingly being developed. Examples include LM-Harness,[44] BIG-Bench,[45] HELM,[46] OpenLLM Leaderboard,[47] DecodingTrust,[48] and HEIM.[49]

Since foundation models’ utility depends on their own general capabilities and the performance of fine-tuned applications, evaluation must cover both metrics. Proper evaluation examines both a foundation model’s downstream applications in aggregate and the direct properties the foundation model holds. To ensure further equity in evaluation, certain existing evaluation frameworks account for all adaptation resources, which leads to more informed analyses for the benefit of all stakeholders.[50]

Supply chain

Foundation models’ general capabilities allow them to fulfill a unique role in the AI ecosystem,[51] fueled by many upstream and downstream technologies.[1] Training a foundation model requires several resources (e.g. data, compute, labor, hardware, code), with foundation models often involving immense amounts of data and compute (also referred to as computational power). Due to foundation models’ large development costs and inexpensive adaptation requirements, the AI landscape has shifted to a small subset of AI companies making foundation models for downstream adaptation.[52] Most foundation model companies therefore outsource the acquisition of data and compute to specialized data providers (e.g. Scale AI,[53] Surge[54]) and compute providers (e.g. Amazon Web Services, Google Cloud, Microsoft Azure).

The foundation model developer itself will then take the data and use the supplied compute to actually train the foundation model. After the foundation model is completely built, much of the data and labor requirements abate. In this development process, hardware and compute are the most necessary, and also the most exclusive resources. To train larger and more complex AI, a sufficient amount of compute is key. However, compute is consolidated in the hands of a few, select entities, which most foundation model developers depend on. As such, the foundation model pipeline is concentrated heavily around these providers. Compute is also costly; in 2023, AI companies spent more than 80% of total capital on compute resources.[55]

Foundation models require a large amount of general data to power their capabilities. Early foundation models scraped subsets of the internet to provide this data. As the size and scope of foundation models grow, larger quantities of internet scraping become necessary, which increases the likelihood of including biased or toxic data. This toxic or biased data can disproportionately harm marginalized groups and exacerbate existing prejudices.[56]

To address this issue of low-quality data that arose with unsupervised training, some foundation model developers have turned to manual filtering. This practice, known as data labor, comes with its own host of issues.[57] Such manual data detoxification is often outsourced to reduce labor costs, with some workers making less than $2 per hour.[58]

The foundation model will then be hosted online either via the developer or via an external organization. Once released, other parties can create applications based on the foundation model, whether through fine-tuning or wholly new purposes. People can then access these applications to serve their various needs, allowing one foundation model to power and reach a wide audience.

Release strategies

After a foundation model is built, it can be released in one of many ways. There are many facets to a release: the asset itself, who has access, how access changes over time, and the conditions on use.[59] All these factors contribute to how a foundation model will affect downstream applications.[60] In particular, the two most common forms of foundation model release are through APIs and direct model downloads.

When a model is released via an API, users can query the model and receive responses, but cannot directly access the model itself. Comparatively, the model could be directly downloadable for users to access and modify. Both release strategies are often classified as an open release. The exact definition of an open release is disputed, but widely accepted requirements are provided by the Open Source Initiative.
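
As a hypothetical illustration of the two access patterns, the sketch below contrasts querying a hosted API (the endpoint and model name are placeholders, not real services) with downloading released weights and running them locally via the Hugging Face transformers library; the repository id shown is one example of a downloadable (though access-gated) model.

```python
# Two common access patterns for a released foundation model (illustrative only).
import requests
from transformers import AutoModelForCausalLM, AutoTokenizer

# 1) API release: query a hosted model without ever holding its weights.
response = requests.post(
    "https://api.example.com/v1/generate",  # placeholder endpoint, not a real service
    json={"model": "example-model", "prompt": "Define a foundation model."},
)
print(response.json())

# 2) Downloadable release: fetch the weights and run or modify them locally.
repo_id = "meta-llama/Llama-2-7b-hf"        # example repo id; access may be gated
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)
inputs = tokenizer("Define a foundation model.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```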

Some open foundation models include Llama 2 and Mistral. While open foundation models can further research and development more easily, they are also more susceptible to misuse. Open foundation models can be downloaded by anyone, and particularly powerful models can be fine-tuned to intentionally or unintentionally cause harm.

During a closed release, the foundation model cannot be accessed by the public, but is used internally by an organization. Such releases are considered safer, but offer no additional value to the research community or the public at large.

Some foundation models, like Google DeepMind’s Flamingo,[61] are fully closed, meaning they are available only to the model developer; others, such as OpenAI’s GPT-4, are limited access, available to the public but only as a black box; and still others, such as Meta’s Llama 2, are open, with broadly available model weights enabling downstream modification and scrutiny.

References

  1. ^ a b c d Competition and Markets Authority (2023). AI Foundation Models: Initial Report. Available at: https://assets.publishing.service.gov.uk/media/65081d3aa41cc300145612c0/Full_report_.pdf
  2. ^ "Introducing the Center for Research on Foundation Models (CRFM)". Stanford HAI. 18 August 2021. Retrieved 11 June 2022.
  3. ^ Nestor Maslej, Loredana Fattorini, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, Helen Ngo, Juan Carlos Niebles, Vanessa Parli, Yoav Shoham, Russell Wald, Jack Clark, and Raymond Perrault, “The AI Index 2023 Annual Report,” AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2023.
  4. ^ Rogers, Anna; Kovaleva, Olga; Rumshisky, Anna (2020). "A Primer in BERTology: What we know about how BERT works". arXiv:2002.12327 [cs.CL].
  5. ^ Tackling multiple tasks with a single visual language model, 28 April 2022, retrieved 13 June 2022
  6. ^ Copet, Jade; Kreuk, Felix; Gat, Itai; Remez, Tal; Kant, David; Synnaeve, Gabriel; Adi, Yossi; Défossez, Alexandre (7 November 2023). "Simple and Controllable Music Generation". arXiv:2306.05284 [cs.SD].
  7. ^ "Speaking robot: Our new AI model translates vision and language into robotic actions". Google. 28 July 2023. Retrieved 11 December 2023.
  8. ^ Nguyen, Tuan Dung; Ting, Yuan-Sen; Ciucă, Ioana; O'Neill, Charlie; Sun, Ze-Chang; Jabłońska, Maja; Kruk, Sandor; Perkowski, Ernest; Miller, Jack (12 September 2023). "AstroLLaMA: Towards Specialized Foundation Models in Astronomy". arXiv:2309.06126 [astro-ph.IM].
  9. ^ Tu, Tao; Azizi, Shekoofeh; Driess, Danny; Schaekermann, Mike; Amin, Mohamed; Chang, Pi-Chuan; Carroll, Andrew; Lau, Chuck; Tanno, Ryutaro (26 July 2023). "Towards Generalist Biomedical AI". arXiv:2307.14334 [cs.CL].
  10. ^ Zvyagin, Maxim; Brace, Alexander; Hippe, Kyle; Deng, Yuntian; Zhang, Bin; Bohorquez, Cindy Orozco; Clyde, Austin; Kale, Bharat; Perez-Rivera, Danilo (11 October 2022). "GenSLMs: Genome-scale language models reveal SARS-CoV-2 evolutionary dynamics". bioRxiv 10.1101/2022.10.10.511571.
  11. ^ Engineering, Spotify (13 October 2023). "LLark: A Multimodal Foundation Model for Music". Spotify Research. Retrieved 11 December 2023.
  12. ^ Li, Raymond; Allal, Loubna Ben; Zi, Yangtian; Muennighoff, Niklas; Kocetkov, Denis; Mou, Chenghao; Marone, Marc; Akiki, Christopher; Li, Jia (9 May 2023). "StarCoder: may the source be with you!". arXiv:2305.06161 [cs.CL].
  13. ^ Se, Ksenia; Spektor, Ian (5 April 2024). "Revolutionizing Time Series Forecasting: Interview with TimeGPT's creators". Turing Post. Retrieved 11 April 2024.
  14. ^ Azerbayev, Zhangir; Schoelkopf, Hailey; Paster, Keiran; Santos, Marco Dos; McAleer, Stephen; Jiang, Albert Q.; Deng, Jia; Biderman, Stella; Welleck, Sean (30 November 2023). "Llemma: An Open Language Model For Mathematics". arXiv:2310.10631 [cs.CL].
  15. ^ a b Bommasani, Rishi; et al. (18 August 2021). On the Opportunities and Risks of Foundation Models (Report). arXiv:2108.07258.
  16. ^ "Reflections on Foundation Models". Stanford HAI. 18 October 2021. Retrieved 22 May 2023.
  17. ^ Bommasani, Rishi; Liang, Percy (18 October 2021). "Reflections on Foundation Models". Stanford CRFM. Retrieved 11 December 2023.
  18. ^ Marcus, Gary (11 September 2021). "Has AI found a new Foundation?". The Gradient. Retrieved 11 December 2023.
  19. ^ The White House (30 October 2023). "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence". The White House. Retrieved 12 February 2024.
  20. ^ "AI Foundation Model Transparency Act" (PDF).
  21. ^ a b Liang, Percy; Bommasani, Rishi; Lee, Tony; Tsipras, Dimitris; Soylu, Dilara; Yasunaga, Michihiro; Zhang, Yian; Narayanan, Deepak; Wu, Yuhuai (1 October 2023), Holistic Evaluation of Language Models, arXiv:2211.09110, retrieved 13 February 2024
  22. ^ "Joint Statement on AI Safety and Openness". Mozilla. 31 October 2023. Retrieved 12 February 2024.
  23. ^ "Hawley and Blumenthal Demand Answers from Meta, Warn of Misuse After 'Leak' of Meta's AI Model". Senator Josh Hawley. 6 June 2023. Retrieved 12 February 2024.
  24. ^ a b Anderljung, Markus; Barnhart, Joslyn; Korinek, Anton; Leung, Jade; O'Keefe, Cullen; Whittlestone, Jess; Avin, Shahar; Brundage, Miles; Bullock, Justin (7 November 2023), Frontier AI Regulation: Managing Emerging Risks to Public Safety, arXiv:2307.03718, retrieved 12 February 2024
  25. ^ Singhal, Karan; Azizi, Shekoofeh; Tu, Tao; Mahdavi, S. Sara; Wei, Jason; Chung, Hyung Won; Scales, Nathan; Tanwani, Ajay; Cole-Lewis, Heather; Pfohl, Stephen; Payne, Perry; Seneviratne, Martin; Gamble, Paul; Kelly, Chris; Babiker, Abubakr (August 2023). "Large language models encode clinical knowledge". Nature. 620 (7972): 172–180. arXiv:2212.13138. doi:10.1038/s41586-023-06291-2. ISSN 1476-4687. PMID 37438534.
  26. ^ Nori, Harsha; King, Nicholas; McKinney, Scott Mayer; Carignan, Dean; Horvitz, Eric (12 April 2023), Capabilities of GPT-4 on Medical Challenge Problems, arXiv:2303.13375, retrieved 12 February 2024
  27. ^ Simshaw, Drew (22 April 2022). "Access to A.I. Justice: Avoiding an Inequitable Two-Tiered System of Legal Services". SSRN Electronic Journal.
  28. ^ Arbel, Yonathan A.; Becher, Shmuel I. (2020). "Contracts in the Age of Smart Readers". SSRN Electronic Journal. doi:10.2139/ssrn.3740356. S2CID 229386991.
  29. ^ "General-purpose artificial intelligence | Think Tank | European Parliament". www.europarl.europa.eu. Retrieved 12 February 2024.
  30. ^ Bommasani, Rishi; Soylu, Dilara; Liao, Thomas I.; Creel, Kathleen A.; Liang, Percy (28 March 2023), Ecosystem Graphs: The Social Footprint of Foundation Models, arXiv:2303.15772, retrieved 12 February 2024
  31. ^ Bommasani, Rishi; Klyman, Kevin; Longpre, Shayne; Kapoor, Sayash; Maslej, Nestor; Xiong, Betty; Zhang, Daniel; Liang, Percy (19 October 2023), The Foundation Model Transparency Index, arXiv:2310.12941, retrieved 12 February 2024
  32. ^ Shannon, Claude Elwood (July 1948). "A Mathematical Theory of Communication" (PDF). Bell System Technical Journal.
  33. ^ Radford, Alec; Kim, Jong Wook; Hallacy, Chris; Ramesh, Aditya; Goh, Gabriel; Agarwal, Sandhini; Sastry, Girish; Askell, Amanda; Mishkin, Pamela (26 February 2021), Learning Transferable Visual Models From Natural Language Supervision, arXiv:2103.00020, retrieved 13 February 2024
  34. ^ Kaplan, Jared; McCandlish, Sam; Henighan, Tom; Brown, Tom B.; Chess, Benjamin; Child, Rewon; Gray, Scott; Radford, Alec; Wu, Jeffrey (22 January 2020), Scaling Laws for Neural Language Models, arXiv:2001.08361, retrieved 13 February 2024
  35. ^ Jo, Eun Seo; Gebru, Timnit (27 January 2020). "Lessons from archives: Strategies for collecting sociocultural data in machine learning". Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. pp. 306–316. arXiv:1912.10389. doi:10.1145/3351095.3372829. ISBN 978-1-4503-6936-7.
  36. ^ Bender, Emily M.; Gebru, Timnit; McMillan-Major, Angelina; Shmitchell, Shmargaret (1 March 2021). "On the Dangers of Stochastic Parrots: Can Language Models be Too Big? 🦜". Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. FAccT '21. New York, NY, USA: Association for Computing Machinery. pp. 610–623. doi:10.1145/3442188.3445922. ISBN 978-1-4503-8309-7.
  37. ^ Brown, Tom B.; Mann, Benjamin; Ryder, Nick; Subbiah, Melanie; Kaplan, Jared; Dhariwal, Prafulla; Neelakantan, Arvind; Shyam, Pranav; Sastry, Girish (22 July 2020), Language Models are Few-Shot Learners, arXiv:2005.14165, retrieved 13 February 2024
  38. ^ Caballero, Ethan; Gupta, Kshitij; Rish, Irina; Krueger, David (2022). "Broken Neural Scaling Laws". International Conference on Learning Representations (ICLR), 2023.
  39. ^ Zaken, Elad Ben; Ravfogel, Shauli; Goldberg, Yoav (5 September 2022), BitFit: Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models, arXiv:2106.10199, retrieved 13 February 2024
  40. ^ "Papers with Code - MMLU Benchmark (Multi-task Language Understanding)". paperswithcode.com. Retrieved 21 April 2024.
  41. ^ Yue, Xiang; Ni, Yuansheng; Zhang, Kai; Zheng, Tianyu; Liu, Ruoqi; Zhang, Ge; Stevens, Samuel; Jiang, Dongfu; Ren, Weiming (20 December 2023), MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI, arXiv:2311.16502, retrieved 13 February 2024
  42. ^ "Papers with Code - HumanEval Benchmark (Code Generation)". paperswithcode.com. Retrieved 21 April 2024.
  43. ^ "Papers with Code - GSM8K Benchmark (Arithmetic Reasoning)". paperswithcode.com. Retrieved 21 April 2024.
  44. ^ EleutherAI/lm-evaluation-harness, EleutherAI, 21 April 2024, retrieved 21 April 2024
  45. ^ Srivastava, Aarohi; Rastogi, Abhinav; Rao, Abhishek; Shoeb, Abu Awal Md; Abid, Abubakar; Fisch, Adam; Brown, Adam R.; Santoro, Adam; Gupta, Aditya (12 June 2023), Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models, arXiv:2206.04615, retrieved 13 February 2024
  46. ^ "Holistic Evaluation of Language Models (HELM)". crfm.stanford.edu. Retrieved 21 April 2024.
  47. ^ "open-llm-leaderboard (Open LLM Leaderboard)". huggingface.co. 9 November 2023. Retrieved 21 April 2024.
  48. ^ "DecodingTrust Benchmark". decodingtrust.github.io. Retrieved 21 April 2024.
  49. ^ "Holistic Evaluation of Image Models (HEIM)". crfm.stanford.edu. Retrieved 21 April 2024.
  50. ^ Linzen, Tal (July 2020). Jurafsky, Dan; Chai, Joyce; Schluter, Natalie; Tetreault, Joel (eds.). "How Can We Accelerate Progress Towards Human-like Linguistic Generalization?". Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Online: Association for Computational Linguistics: 5210–5217. arXiv:2005.00955. doi:10.18653/v1/2020.acl-main.465.
  51. ^ "Ecosystem Graphs for Foundation Models". crfm.stanford.edu. Retrieved 13 February 2024.
  52. ^ Vipra, Jai; Korinek, Anton (2 November 2023), Market Concentration Implications of Foundation Models, arXiv:2311.01550, retrieved 13 February 2024
  53. ^ "Accelerate the Development of AI Applications | Scale AI". scale.com. Retrieved 21 April 2024.
  54. ^ "Surge AI | World's Most Powerful Data Labeling Platform". www.surgehq.ai. Retrieved 21 April 2024.
  55. ^ pnp (27 September 2023). "Computational Power and AI". AI Now Institute. Retrieved 13 February 2024.
  56. ^ Tiku, Nitasha; Schaul, Kevin; Chen, Szu Yu. "These fake images reveal how AI amplifies our worst stereotypes". Washington Post. Retrieved 13 February 2024.
  57. ^ "How the AI industry profits from catastrophe". MIT Technology Review. Retrieved 13 February 2024.
  58. ^ "Exclusive: The $2 Per Hour Workers Who Made ChatGPT Safer". TIME. 18 January 2023. Retrieved 13 February 2024.
  59. ^ Liang, Percy; Bommasani, Rishi; Creel, Kathleen (17 May 2022). "The Time is Now to Develop Community Norms for the Release of Foundation Models". Stanford CRFM.
  60. ^ Solaiman, Irene (5 February 2023), The Gradient of Generative AI Release: Methods and Considerations, arXiv:2302.04844, retrieved 13 February 2024
  61. ^ Alayrac, Jean-Baptiste; Donahue, Jeff; Luc, Pauline; Miech, Antoine; Barr, Iain; Hasson, Yana; Lenc, Karel; Mensch, Arthur; Millican, Katie (15 November 2022), Flamingo: a Visual Language Model for Few-Shot Learning, arXiv:2204.14198, retrieved 13 February 2024