CISCO SYSTEMS (Czech Republic) s.r.o.

Cisco Artificial Intelligence Blog

The Cisco blog on artificial intelligence focuses on using AI and machine learning (ML) to transform enterprise networking, security, and collaboration. Here the company communicates its approach to trustworthy technology through six principles for responsible AI: transparency, fairness, accountability, privacy, security, and reliability.

Thank you to all of the contributors of the State of AI Security 2026, including Amy Chang, Tiffany Saade, Emile Antone, and the broader Cisco AI research team. As artificial intelligence (AI) technology and enterprise AI adoption advance at a rapid pace, the security landscape around it is expanding faster, leaving many defenders struggling to keep […]
February 19, 2026
Large language models (LLMs) have become essential tools for organizations, with open weight models providing additional control and flexibility for customizing models to their specific use cases. Last year, OpenAI released its gpt-oss series, including standard and, shortly after, safeguard variants, focused on safety classification tasks. We decided to evaluate their raw security posture against […]
February 18, 2026
AI systems are evolving faster than most security programs can track. Models change, tools multiply, and agent behaviors emerge across codebases and containers. That creates a simple but urgent question: what is an AI system composed of and how is it built? The answer to that is Cisco’s AI BOM (AI Bill of Materials), now […]
February 11, 2026
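The bill-of-materials idea above can be sketched with a generic example. Everything in the snippet below — the field names, the values, and the helper function — is a hypothetical illustration of what an AI BOM might track, not Cisco's actual AI BOM format:

```python
# A generic, hypothetical sketch of one AI bill of materials (AI BOM) entry;
# field names are invented for illustration and are not Cisco's schema.
ai_bom_entry = {
    "component": "support-chatbot",            # the AI system being described
    "model": {
        "name": "gpt-oss-20b",                 # base model identifier
        "weights_sha256": "<checksum>",        # integrity hash of the weights
        "license": "apache-2.0",
    },
    "datasets": ["internal-support-tickets"],  # fine-tuning data sources
    "tools": ["web_search", "sql_query"],      # tools an agent can invoke
    "dependencies": ["transformers==4.44.0"],  # software supply chain
}

def components_to_review(bom: dict) -> list[str]:
    """Flatten one BOM entry into a list of auditable items,
    e.g. as input to a security review checklist."""
    items = [bom["model"]["name"]]
    items += bom["datasets"] + bom["tools"] + bom["dependencies"]
    return items
```

The point of any such inventory is exactly this kind of flattening: once models, data, tools, and dependencies are enumerated in one place, each item can be tracked and reviewed as the system changes.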
A year ago, we introduced the world to Cisco AI Defense, the industry’s first truly comprehensive enterprise AI security solution. In the year since, AI technology has evolved at an unbelievable pace, and the AI security landscape has seen seismic shifts in parallel. Teams were once concerned that their chatbots might produce harmful or sensitive […]
February 10, 2026
Today, I’m excited to announce that Cisco is donating Project CodeGuard to the Coalition for Secure AI (CoSAI). We collectively recognize that securing AI-generated code is a challenge that belongs to the entire industry, and that open collaboration is the path forward. Our Journey with Project CodeGuard: When we first open-sourced Project CodeGuard in October 2025, our goal was clear: make secure […]
February 9, 2026
This blog is jointly written by Amy Chang, Hyrum Anderson, Rajiv Dattani, and Rune Kvist. We are excited to announce Cisco as a technical contributor to AIUC-1. The standard will operationalize Cisco’s Integrated AI Security and Safety Framework (AI Security Framework), enabling more secure AI adoption. AI risks are no longer theoretical. We have seen […]
February 6, 2026
This blog was written in collaboration with Yuqing Gao, Jian Tan, Fan Bu, Ali Dabir, Hamid Amini, Doosan Jung, Yury Sokolov, Lei Jin, and Derek Engi. LLMs can sound very convincing, but in network operations, sounding right isn’t enough. Network operations are dominated by structured telemetry, long configuration states, time series at scale, and investigations […]
February 6, 2026
When your CISO mentions “AI security” in the next board meeting, what exactly do they mean? Are they talking about protecting your AI systems from attacks? Using AI to catch hackers? Preventing employees from leaking data to an unapproved AI service? Ensuring your AI doesn’t produce harmful outputs? The answer might be “all of the […]
February 4, 2026
This blog was written in collaboration with Fan Bu, Jason Mackay, Borya Sobolev, Dev Khanolkar, Ali Dabir, Puneet Kamal, Li Zhang, and Lei Jin. “Everything is a file”; some are databases. Machine data underpins observability and diagnosis in modern computing systems, including logs, metrics, telemetry traces, configuration snapshots, and API response payloads. In practice, […]
February 4, 2026
