| (115) | Providers of general-purpose AI models with systemic risks should assess and mitigate possible systemic risks. If, despite efforts to identify and prevent risks related to a general-purpose AI model that may present systemic risks, the development or use of the model causes a serious incident, the general-purpose AI model provider should without undue delay keep track of the incident and report any relevant information and possible corrective measures to the Commission and national competent authorities. Furthermore, providers should ensure an adequate level of cybersecurity protection for the model and its physical infrastructure, if appropriate, along the entire model lifecycle. Cybersecurity protection related to systemic risks associated with malicious use or attacks should duly consider accidental model leakage, unauthorised releases, circumvention of safety measures, and defence against cyberattacks, unauthorised access or model theft. That protection could be facilitated by securing model weights, algorithms, servers, and data sets, such as through operational security measures for information security, specific cybersecurity policies, adequate technical and established solutions, and cyber and physical access controls, appropriate to the relevant circumstances and the risks involved. |
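
As a purely illustrative aid, and not part of the Regulation, the sketch below shows one way a provider might implement a single operational security measure of the kind the recital names: integrity-checking model weight files against a pinned hash manifest to help detect tampering or unauthorised modification. The file locations, the `.safetensors` naming, and the JSON manifest format are assumptions made for the example, not anything prescribed by the recital.

```python
# Illustrative sketch only: one possible operational security measure of the kind
# the recital names (securing model weights). Paths and manifest format are assumed.
import hashlib
import json
from pathlib import Path


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_weights(weights_dir: Path, manifest_path: Path) -> list[str]:
    """Compare current weight-file hashes against a pinned manifest.

    Returns human-readable findings (missing, unexpected, or modified files);
    an empty list means the checked files match the manifest.
    """
    manifest: dict[str, str] = json.loads(manifest_path.read_text())
    current = {p.name: sha256_of(p) for p in sorted(weights_dir.glob("*.safetensors"))}

    findings = []
    for name, expected in manifest.items():
        if name not in current:
            findings.append(f"missing weight file: {name}")
        elif current[name] != expected:
            findings.append(f"hash mismatch (possible tampering): {name}")
    for name in current.keys() - manifest.keys():
        findings.append(f"unexpected weight file not in manifest: {name}")
    return findings


if __name__ == "__main__":
    # Hypothetical locations; adapt to the deployment at hand.
    for issue in verify_weights(Path("weights/"), Path("weights_manifest.json")):
        print(issue)
```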