Sebastian, can you start by sharing your background and what led you to your current role as VP of Engineering at Storyblok?

My journey in the tech industry began with a deep interest in software development and a passion for creating innovative solutions. Over the years, I have held various roles in engineering and management, which have provided me with a broad perspective on technology and its applications.

Before joining Storyblok, I worked with several startups and established companies, focusing on building scalable and secure software solutions. My experience in these diverse environments has been instrumental in shaping my approach to engineering and leadership. What drew me to Storyblok was the company’s vision of transforming content management and the opportunity to lead a talented team in driving that innovation forward.

With 72% of CISOs concerned that AI solutions may lead to security breaches, what are your thoughts on the potential risks generative AI poses to cybersecurity?

The concerns of CISOs are well-founded, as generative AI introduces new dimensions of risk in cybersecurity. AI systems, especially those that can generate content, can be exploited to create highly convincing phishing emails and social engineering attacks. The sophistication of these AI-generated attacks makes them harder to detect with traditional security measures, and the ability of AI to automate and scale such attacks dramatically expands the threat landscape. To mitigate these risks, it is crucial to enhance our detection and response mechanisms, utilising AI and machine learning to identify anomalies and suspicious activities that may indicate AI-driven cyber threats.
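To make the anomaly-detection idea concrete, here is a minimal, illustrative sketch that trains an unsupervised model on synthetic per-session metrics and flags outliers for review. The feature set, the thresholds, and the tooling (scikit-learn's IsolationForest) are assumptions made for this example and do not describe any particular production pipeline.

```python
# Minimal sketch: flagging anomalous activity with an unsupervised model.
# The features and data below are synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-session features: requests per minute, failed logins,
# and distinct endpoints touched. Normal traffic clusters tightly.
normal = np.column_stack([
    rng.normal(20, 5, 500),   # requests/min
    rng.poisson(0.2, 500),    # failed logins
    rng.normal(8, 2, 500),    # distinct endpoints
])

# A few sessions that look like automated, scaled-up probing.
suspicious = np.column_stack([
    rng.normal(300, 50, 5),
    rng.poisson(15, 5),
    rng.normal(60, 10, 5),
])

sessions = np.vstack([normal, suspicious])

# Fit on all traffic; the model isolates points that differ from the bulk.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(sessions)   # -1 = anomaly, 1 = normal

flagged = np.where(labels == -1)[0]
print(f"Flagged {len(flagged)} of {len(sessions)} sessions for review")
```

In practice the flagged sessions would feed an analyst queue or an automated response step; the value of the unsupervised approach is that it does not rely on knowing the attack pattern in advance.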

How do you see generative AI being used to scale and automate cyber-attacks, and why is it difficult to identify these AI-driven incidents?

Generative AI can automate the creation of malicious content at an unprecedented scale, making it easier for attackers to launch widespread campaigns with minimal effort. This includes generating realistic phishing emails, fake news, and even malicious code. The challenge in identifying AI-driven incidents lies in their sophistication and variability. Unlike traditional attacks that may follow predictable patterns, AI-generated attacks can constantly evolve and adapt, making them harder to detect with standard rule-based security systems. Advanced threat detection tools that utilise AI themselves are needed to keep up with these evolving threats.
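As a rough illustration of why fixed rules fall behind, the sketch below contrasts a static keyword filter with a small text classifier that can be retrained as attacker phrasing changes. The sample messages, the keyword list, and the model choice (TF-IDF with logistic regression in scikit-learn) are invented for this example, not a description of any specific product.

```python
# Minimal sketch contrasting a static rule with a retrainable classifier.
# All messages and keywords below are made up for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A fixed keyword rule: easy to write, easy for generated text to avoid.
BLOCKLIST = {"lottery", "wire transfer", "urgent password reset"}

def rule_based_flag(message: str) -> bool:
    text = message.lower()
    return any(phrase in text for phrase in BLOCKLIST)

# A tiny labelled corpus (1 = phishing, 0 = benign); in practice this
# would be thousands of real, regularly refreshed samples.
messages = [
    "Please review the attached quarterly report before Friday",
    "Your invoice for last month is ready in the billing portal",
    "We detected unusual sign-in activity, confirm your credentials here",
    "Your package is held at customs, pay the release fee via this link",
    "Team lunch is moved to 1pm, same place as last week",
    "Final notice: validate your account now to avoid suspension",
]
labels = [0, 0, 1, 1, 0, 1]

# A learned model picks up wording patterns beyond any fixed keyword list
# and can be retrained as attackers change their phrasing.
classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(messages, labels)

new_message = "Unusual activity on your account, verify your identity at this link"
print("rule-based flag:", rule_based_flag(new_message))               # no keyword hit
print("model phishing score:", classifier.predict_proba([new_message])[0, 1])
```

The point is not the specific model but the workflow: a learned detector generalises over wording patterns and can be refreshed with new samples, whereas a blocklist has to be rewritten by hand for every new variant an attacker generates.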

To Know More, Read Full Interview @ https://ai-techpark.com/aitech-interview-with-sebastian-gierlinger/
