Sicherheitsanforderungen an KI-Systeme (Security Requirements for AI Systems)

  • Schwerpunkt (focus section)
  • Published in: Datenschutz und Datensicherheit - DuD

Abstract

Artificial intelligence (AI) brings with it new security challenges that both providers and operators must address. This article gives an overview of protection goals specific to AI systems. It explains potential risks against which AI systems must be protected, as well as measures by which these protection goals can be achieved. In addition, potential side effects of these measures are discussed.



Author information


Correspondence to Oren Halvani.


About this article


Cite this article

Halvani, O., Müller, L. Sicherheitsanforderungen an KI-Systeme. Datenschutz Datensich 49, 302–306 (2025). https://doi.org/10.1007/s11623-025-2092-5
