(111)
It is appropriate to establish a methodology for the classification of
general-purpose AI models as general-purpose AI models with systemic risk. Since systemic risks
result from particularly high capabilities, a general-purpose AI model should be considered
to present systemic risks if it has high-impact capabilities, evaluated on the basis of
appropriate technical tools and methodologies, or significant impact on the internal market due
to its reach. High-impact capabilities in general-purpose AI models mean capabilities that
match or exceed the capabilities recorded in the most advanced general-purpose AI models. The
full range of capabilities in a model could be better understood after its placing on the
market or when deployers interact with the model. According to the state of the art at the time
of entry into force of this Regulation, the cumulative amount of computation used for the
training of the general-purpose AI model measured in floating point operations is one of the
relevant approximations for model capabilities. The cumulative amount of computation used for
training includes the computation used across the activities and methods that are intended to
enhance the capabilities of the model prior to deployment, such as pre-training, synthetic data
generation and fine-tuning. Therefore, an initial threshold of floating point operations should
be set, which, if met by a general-purpose AI model, leads to a presumption that the
model is a general-purpose AI model with systemic risk. This threshold should be adjusted
over time to reflect technological and industrial changes, such as algorithmic improvements or
increased hardware efficiency, and should be supplemented with benchmarks and indicators for
model capability. To inform this, the AI Office should engage with the scientific community,
industry, civil society and other experts. Thresholds, as well as tools and benchmarks for the
assessment of high-impact capabilities, should be strong predictors of the generality of
general-purpose AI models, of their capabilities and of the associated systemic risk, and could take into
account the way the model will be placed on the market or the number of users it may affect. To
complement this system, there should be a possibility for the Commission to take individual
decisions designating a general-purpose AI model as a general-purpose AI model with
systemic risk if it is found that such model has capabilities or an impact equivalent to those
captured by the set threshold. That decision should be taken on the basis of an overall
assessment of the criteria for the designation of a general-purpose AI model with systemic
risk set out in an annex to this Regulation, such as the quality or size of the training data set,
the number of business and end users, its input and output modalities, its level of autonomy and
scalability, or the tools it has access to. Upon a reasoned request of a provider
whose model has been designated as a general-purpose AI model with systemic risk, the
Commission should take the request into account and may decide to reassess whether the
general-purpose AI model can still be considered to present systemic risks.
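Purely as an illustration of the computation-based presumption described in this recital, the sketch below sums an estimate of cumulative training compute across pre-training, synthetic data generation and fine-tuning and compares it against the initial threshold of 10^25 floating point operations set out in Article 51(2). The 6 × parameters × tokens rule of thumb, its uniform application to every activity, and all model figures are assumptions made for this example only; neither this recital nor the Regulation prescribes any particular estimation method.

```python
# Illustrative only: the estimation rule and all figures below are assumptions,
# not requirements of the Regulation.

from dataclasses import dataclass

# Initial presumption threshold in floating point operations (Article 51(2));
# subject to adjustment over time by the Commission.
PRESUMPTION_THRESHOLD_FLOP = 1e25


@dataclass
class TrainingPhase:
    """One capability-enhancing activity carried out prior to deployment."""
    name: str
    parameters: float        # trainable parameters involved in the phase
    tokens_processed: float  # tokens processed during the phase

    def estimated_flop(self) -> float:
        # Common rule of thumb for dense transformer training: roughly
        # 6 FLOPs per parameter per token. Applied uniformly here for
        # simplicity; real estimates would differ per activity.
        return 6.0 * self.parameters * self.tokens_processed


def cumulative_training_flop(phases: list[TrainingPhase]) -> float:
    """Cumulative compute across all pre-deployment capability-enhancing activities."""
    return sum(phase.estimated_flop() for phase in phases)


def presumed_systemic_risk(phases: list[TrainingPhase]) -> bool:
    """Presumption of high-impact capabilities when cumulative compute exceeds the threshold."""
    return cumulative_training_flop(phases) > PRESUMPTION_THRESHOLD_FLOP


if __name__ == "__main__":
    # Hypothetical model: all numbers invented for illustration.
    phases = [
        TrainingPhase("pre-training", parameters=4e11, tokens_processed=1.2e13),
        TrainingPhase("synthetic data generation", parameters=4e11, tokens_processed=5e11),
        TrainingPhase("fine-tuning", parameters=4e11, tokens_processed=2e11),
    ]
    print(f"Cumulative training compute: {cumulative_training_flop(phases):.2e} FLOP")
    print(f"Presumption triggered: {presumed_systemic_risk(phases)}")
```

The point of the sketch is only that the presumption turns on the sum of compute over all capability-enhancing activities prior to deployment, not on pre-training alone, and that the threshold value itself is a parameter the Commission may adjust over time.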