E-Magazine – Cybersecurity & Technology News | Secure Futures | Kaspersky
https://www.kaspersky.com/blog
The Official Blog from Kaspersky covers information to help protect you against viruses, spyware, hackers, spam & other forms of malware.
Mon, 19 Feb 2024 10:04:47 +0000

https://www.kaspersky.com/blog/secure-futures-magazine/interconnected-technologies-four-strategies/50464/ Wed, 28 Feb 2024 15:30:48 +0000

Of all business investments today, interconnected technologies – the growing network of devices, systems and applications connected to the internet and to each other – are the most vital and challenging. From artificial intelligence (AI) to data spaces to the internet of things (IoT), interconnected tech is letting enterprises gather more data, automate processes and make better decisions.

While this ‘Fourth Industrial Revolution‘ is propelling change in every industry, it also brings new risks and challenges in securing enterprises and safeguarding customers.

Kaspersky’s new research, published in the 2024 report Connecting the future of business: How leaders should prepare for using and securing AI and interconnected technologies, aims to help businesses stay ahead of the changes interconnected technologies bring.

About our interconnected tech research

Kaspersky surveyed 560 IT security leaders in six regions – North America (US), Latin America (LATAM: Brazil, Chile, Colombia and Mexico), Europe (Austria, France, Germany and Switzerland), Middle East and Africa (META: Saudi Arabia, South Africa, Turkey and United Arab Emirates), Russia and Asia-Pacific (APAC: China, India and Indonesia).

Respondents were either aware of or involved in cybersecurity decisions about interconnected technologies. They worked in organizations with at least 1,000 employees across many sectors.

We asked how they’re introducing and securing AI, web 3.0, data spaces, digital twins, augmented reality (AR), virtual reality (VR), 6G and the internet of things (IoT).

Four interconnected tech preparation strategies

With the scale of change these new technologies will likely bring, organizations must have strategies to prepare to adopt and secure interconnected tech.

We found organizations that felt ready to secure interconnected tech tended to adopt four strategies: Embracing security by design principles, training and upskilling their workforce, upgrading their cybersecurity solutions and striving to meet regulation and standards. Here’s a little more on each strategy.

1.    Adopting security by design principles

Cybersecurity professionals should be part of the initial design process, not relegated to the final stages. This proactive approach ensures security considerations are built in from the start, often called security by design.

Head of Cybersecurity at a leading bank in Brazil

Integrating cybersecurity into each stage of the software development lifecycle makes secure-by-design software and hardware more resilient to cyberattacks.

Our research found leaders focused on integrating cybersecurity into development processes – dubbed Security by Design Promoters – were also better prepared to secure interconnected technologies. 50 percent of Security by Design Promoters felt prepared to secure interconnected tech, compared with 26 percent of the rest.

Leadership teams can start promoting security by design by asking where in their design processes cybersecurity expertise is usually first involved. They may also play a role in ensuring budget for security testing during design.

2.   Training and upskilling your workforce

With the right cybersecurity education, staff can be a formidable line of defense against cyberattacks. Building a cyberaware culture requires a strategy that empowers employees to gain knowledge and put it into practice.

Our research found Training and Upskilling Promoters – organizations prioritizing training and upskilling their people – were better prepared to secure interconnected tech. 55 percent of Training and Upskilling Promoters felt prepared to secure interconnected tech, compared with 25 percent of the rest.

Leaders we surveyed shared cybersecurity education approaches that worked well. They favored regular programs covering topics like cybersecurity policies and common cyberthreats, phishing simulations to identify further training needs and awareness campaigns sharing best practice and real-world cyber incident examples. Leaders also emphasized user-friendly security policies, greater collaboration with the IT and security teams and demonstrating good cybersecurity practice from the top.

Leadership teams can support training and upskilling the workforce by asking how often employees receive cybersecurity training and how the organization measures their understanding.

My workforce is not yet ready for interconnected technologies. They’ll get there in time – it’s a continuous learning process. You need a proper training plan for employees to be ready.

CISO of a leading medical company in Saudi Arabia

3.   Upgrading cybersecurity solutions

More devices connecting to the internet means more ways to attack. As businesses adopt these technologies, they need more advanced cybersecurity solutions, such as those with enhanced access controls, encryption and regulatory compliance.

Our research found Advanced Cybersecurity Adopters — organizations focused on upgrading cybersecurity solutions — are better prepared to secure interconnected tech. 55 percent of Advanced Cybersecurity Adopters felt prepared to secure interconnected tech, compared with 22 percent of the rest.

Advanced cybersecurity solutions let your business adapt to evolving threats, meet new regulation and protect critical data.

The leadership team should ask what the organization knows about threats that suggest upgrading cybersecurity solutions, how to minimize the impact of upgrading solutions on workflows and how to ensure employees can use the upgraded tools effectively.

4.   Meeting regulation and standards

To avoid legal problems or reputation damage, your cybersecurity practice must meet changing standards and legal requirements.

Our research found organizations prioritizing meeting new regulation and standards — Regulation Promoters — are better prepared to secure interconnected tech. 40 percent of Regulation Promoters felt prepared to secure interconnected tech, compared with 27 percent of the rest.

To better meet regulation and standards, leadership teams can ask if the organization already meets all security regulation and standards, and if not, which to prioritize. You might also look at how to use cybersecurity compliance processes to build customers’ digital trust.

Interconnected technologies bring huge business opportunities but also a new era of vulnerability to serious cyberthreats. With more data collected and transmitted, cybersecurity measures must get stronger.

Organizations can use these four strategies to safeguard critical assets and fortify customer trust amid a growing interconnected landscape. Leaders must also resource cybersecurity well enough to enable access to new cybersecurity solutions that can meet interconnected tech’s oncoming challenges.

https://www.kaspersky.com/blog/secure-futures-magazine/interconnected-technologies-securing-ai/50303/ Wed, 28 Feb 2024 15:29:45 +0000

Of all investments today, artificial intelligence (AI) is transforming enterprises, enabling them to provide improved services, automate processes and make better decisions. Business leaders must invest in the right tech at the right time, and have the people and systems in place that can guarantee success.

AI will change most industries but also brings new risks and challenges in securing enterprises and safeguarding customers.

Kaspersky’s new research, published in the 2024 report Connecting the future of business: How leaders should prepare for using and securing AI and interconnected technologies, aims to help businesses stay ahead of the changes AI brings.

About the interconnected tech research

Kaspersky surveyed 560 IT security leaders in six regions – North America (US), Latin America (LATAM: Brazil, Chile, Colombia and Mexico), Europe (Austria, France, Germany and Switzerland), Middle East and Africa (META: Saudi Arabia, South Africa, Turkey and United Arab Emirates), Russia and Asia-Pacific (APAC: China, India and Indonesia).

Respondents were either aware of or involved in cybersecurity decisions about interconnected tech such as AI, web 3.0, data spaces, digital twins, augmented reality (AR), virtual reality (VR), 6G and the internet of things (IoT). They worked in organizations with at least 1,000 employees across many sectors. Kaspersky asked how they’re introducing and securing AI and other interconnected technologies. Here’s what the research revealed.

AI is a significant focus for organizations over the next two years

These new technologies are an evolution that started with business intelligence then evolved into data science. We keep finding new ways to gain intelligence from data, and Generative AI is an example of this.

Head of Cybersecurity at a leading bank in Brazil

AI is becoming a cornerstone for organizations striving to stay ahead thanks to its ability to foster innovation. In cybersecurity, AI offers predictive analytics and greater adaptability. Goldman Sachs says global investment in AI will reach 200 billion US dollars by 2025.

AI is more than a technology. It’s a strategy that lets organizations navigate today’s complex business environment.

AI adoption

Nearly all organizations surveyed have already adopted AI (54 percent) or plan to adopt it within two years (33 percent).

Respondents see Generative AI (GenAI) as valuable for improving security technologies’ performance. Goldman Sachs economists Joseph Briggs and Devesh Kodnani say GenAI could boost global labor productivity by more than one percentage point yearly once its use becomes widespread.

IT companies worldwide are upskilling their workforce and undertaking GenAI pilots to avoid falling behind. IT giant Accenture pledged three billion US dollars to data and AI investment, while French tech firm Capgemini is putting two billion euros towards GenAI over the next three years.

No previous technology wave has captured the attention of leaders and the general public as fast as GenAI […] Companies will need to reinvent how they operate with AI at the core.

Julie Sweet, Chief Executive Officer, Accenture

Securing AI

49 percent of organizations said they were extremely well or well prepared to secure AI. Almost 50 percent think AI is slightly or not at all hard to secure.

Compared with securing other interconnected technologies, most organizations felt more prepared to secure AI. In responding to Kaspersky’s survey, the head of cybersecurity at a Brazilian bank said, “I mostly don’t find security problems with GenAI. We already understand how to run and train it.”

While many organizations feel confident in their ability to secure AI, a little over half felt less than well-prepared to secure AI. These businesses need to do more to prepare for the inevitable scale-up of AI-based technologies.

AI makes digital trust and data compliance indispensable

AI introduces new challenges and complexities in protecting sensitive data and maintaining digital trust.

Digital trust is an aspect of customer trust. In the Kaspersky podcast Insight Story, emerging tech security expert Malek Ben Salem described digital trust as “our confidence that our privacy is protected and secured, the reliability of digital transactions and how sure we are of the identities of those we engage with.”

Management consultancy McKinsey’s 2022 Digital Trust report said 70 percent of people trust companies to protect their privacy, but most companies aren’t meeting their expectations.

Data breaches, unauthorized access and privacy violations can erode digital trust and lead to reputation damage, lost business and regulatory consequences. Building digital trust takes time but is essential to users’ confidence in AI.

How AI impacts digital trust

AI lets organizations collect and store more data about customers, operations and interactions. This wealth of data helps gain insight but also means more risk of breaches.

AI facilitates sharing data between devices, networks and organizations. Data sharing aids collaboration, efficiency and innovation but also means more opportunities for unauthorized access.

As AI is relatively new, organizations may struggle to be clear about their practices and security around it. Without clear communication, trust may break down.

Top ways organizations are improving digital trust and compliance

Adopting zero-trust frameworks is organizations’ top response to the need to improve digital trust, with 81 percent already using or planning to adopt zero-trust within two years. Other actions include pursuing data privacy and cyber resilience regulations (74 percent) and digital sovereignty initiatives (73 percent).

Kaspersky’s research found companies introducing zero-trust frameworks are better prepared to secure AI and other interconnected technologies: 51 percent of zero-trust framework early adopters said they were extremely well-prepared or well-prepared to secure these technologies, compared with just 27 percent of the rest.

The zero-trust canvas: Painting the future of cybersecurity

The rise of AI is opening the door for organizations to break with all traditions that don’t work for today’s tech realities. Traditional cybersecurity is vulnerable to insider attack because it relies on trusting those within a network’s perimeter while viewing outsiders as suspicious.

Enter: Zero-trust frameworks. The head of IT at cloud provider Searce said, “The zero-trust security model is the only framework that works. The traditional security model can’t do the job, especially for interconnected technologies.”

Under zero-trust frameworks, every user and device must undergo continuous multi-factor authentication (MFA) and authorization, regardless of their trust level. Users and devices get the minimum access needed for their tasks, with constant monitoring of user activity, device behavior and network traffic.
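The deny-by-default logic described above can be sketched in a few lines. This is an illustrative toy, not any vendor’s API: the user names, resources and policy table are invented, and a real zero-trust deployment would also continuously evaluate context like location, time and device telemetry.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    mfa_passed: bool        # result of this session's multi-factor check
    device_compliant: bool  # e.g. patched OS, disk encryption enabled
    resource: str
    action: str             # "read" or "write"

# Hypothetical least-privilege policy: user -> resource -> allowed actions.
POLICY = {
    "analyst": {"reports": {"read"}},
    "admin":   {"reports": {"read", "write"}},
}

def authorize(req: Request) -> bool:
    """Deny by default; allow only when identity, device and policy all pass."""
    if not (req.mfa_passed and req.device_compliant):
        return False
    allowed = POLICY.get(req.user, {}).get(req.resource, set())
    return req.action in allowed
```

Even an admin is refused here if the multi-factor check fails, which is the point: no user or device carries implicit trust from being inside the network.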

Regulation and compliance

With each territory having different data protection and privacy legislation, global organizations find it hard to achieve consistent standards around AI.

62 percent of leaders find compliance certification hard. They want to see standardized compliance processes across regulations and say online tools and platforms, with clear guidelines, would further simplify certification.

Handling growing challenges and preparing to secure AI

With the scale of change AI will likely bring, organizations must have a strategy to prepare themselves to adopt and secure it.

Kaspersky research uncovered four effective strategies to ensure readiness to secure AI and other interconnected technologies: Adopting security by design principles, training and upskilling your workforce, upgrading your cybersecurity solutions and meeting regulation and standards. The full report, Connecting the future of business, details each strategy.

Interconnected technologies like AI bring huge business opportunities but also a new era of vulnerability to serious cyberthreats. With more data collected and transmitted, cybersecurity measures must get stronger.

Organizations can use effective strategies to safeguard critical assets and fortify customer trust amid a growing interconnected landscape. Leaders must also resource cybersecurity well enough to enable access to improved cybersecurity solutions that can meet oncoming challenges.

https://www.kaspersky.com/blog/secure-futures-magazine/brazil-ransomware-fundacao-casa/50341/ Fri, 26 Jan 2024 10:02:22 +0000

There’s an awesome purpose behind São Paulo, Brazil’s 121 youth justice centers, Fundação CASA: Every day, they work with teenagers who have committed offenses and been given a court-ordered chance to learn to change their ways.

But with 5,000 teenagers accessing a network of some 10,000 devices, Fundação CASA’s information security and cybernetics team must manage a tinderbox of cybersecurity risk daily.

24 hours to escape a ransomware attack is the latest film in Tomorrow Unlocked’s hacker:HUNTER Behind the Screens series. Fundação CASA’s cybercrime fighters tell how they foiled a ransomware attack in a day, successfully safeguarding their young clients’ personal data. There’s much any organization or business can learn from their winning formula.

Young people sentenced to learning

When young people commit a criminal offense in São Paulo, the courts may sentence them to rehabilitation through learning how to escape criminal behavior patterns. That’s where Fundação CASA comes in: They deliver that education on behalf of the Department of Justice and Citizenship.

Meanwhile, the private data of the 5,000 teens who attend one of Fundação CASA’s 121 centers must be kept secure.

Julio Signorini has worked for Fundação CASA for over 20 years. He says, “As they’re taught by the state, the teens’ data comes under the Child and Adolescent Statute (ECA). There’s always a risk their data may be leaked.”

What is ransomware?

There are many ways the young people’s data could be leaked, including deliberate cyberattacks. Julio says, “A constant threat for us is ransomware: Malicious software that encrypts your data and demands you pay to recover it.”

To spread ransomware, attackers use social engineering to find users’ vulnerabilities.

Something familiar – like an email from a contact in their contact list or advertisements – persuades that person to click a link, downloading malware to their device.

Then the malware can spread across the network, eventually encrypting files and demanding a ransom.

Ransomware is a growing threat. Kaspersky software detected over 21,000 ransomware strains, and ransomware attacks rose 63 percent as a proportion of total attacks between 2021 and 2022.

Rapid response foils attack

Julio explains how Fundação CASA’s most recent ransomware incident began. “A young person brought in a compromised USB flash drive from home.”

Alex Christy Rogatti, Fundação CASA’s Head of Security, remembers the day well. “It was a tense time because it was our first experience with ransomware, but we addressed it in one day.”

Julio says their fast response started with their young client’s good decision to report something unusual. “The young person noticed his device behaving strangely and contacted our service desk, who quickly escalated the case to our information security and cybernetics team.”

Alex explains what happened next: “We isolated the infected device and recovered encrypted data on that device and our network. We isolated the malware so it couldn’t spread further.”

Should you pay the ransom?

Not every organization responds as fast and effectively as Fundação CASA. Proving there’s no low to which cybercriminals won’t sink, 2023 saw some particularly vicious ransomware attacks, like a ransomware gang breaching Lehigh Valley Health Network in Pennsylvania, US, then leaking stolen photos and personal details of breast cancer patients.

It’s the severity of cases like these that tempts some victims of ransomware to pay their captors. Fundação CASA’s Chief Information Security Officer (CISO) Odenilson Dos Santos Bonfim says, “Paying the ransom should be the last option. First, there’s no guarantee they’ll let you unencrypt your data when you pay. Second, paying the ransom encourages more ransomware. Third, you’d share financial information they may use in a scam or financial crime in future.”

To help ransomware victims and deter these kinds of attacks, Kaspersky is a founding partner of the No More Ransom initiative. It offers free ransomware decryption tools and advice on how to prevent and deal with ransomware attacks.

Reducing business vulnerability to ransomware

Odenilson thinks there’s much businesses can do to prevent cyberattacks of all kinds.

Cybercriminals often exploit system vulnerabilities, like outdated systems. They also use malicious websites to inject corrupt information or files giving access to the user’s machine.

Odenilson Dos Santos Bonfim, Chief Information Security Officer (CISO), Fundação CASA

“We maintain an always up-to-date environment, with effective solutions for monitoring data and preventing any type of attack.”

That a user first raised the alarm about this attack shows the importance of a cyber-aware organizational culture. Odenilson says, “It’s fundamental to encourage awareness, good market practices and publicize information security and cybersecurity information to your whole team and organization.”

How the ransomware attack on Fundação CASA was stopped echoes the importance of their work showing young people alternatives to a life of crime. By doing the right thing thanks to knowing what to do, one young person kickstarted the process that safeguarded their peers’ precious personal data. The quick thinking and coordinated action of the service desk and information security and cybernetics teams shows how strong relationships make strong cybersecurity.

https://www.kaspersky.com/blog/secure-futures-magazine/business-cyberthreats-2024/50183/ Fri, 29 Dec 2023 15:55:58 +0000

Just as things seemed to be settling down post-pandemic, 2023 turned out to be a year of global political upheaval, conflict and climate catastrophe. Recession impacts and rising prices for business basics like electricity have seen closures rise and start-ups decline. Congratulations are in order if your organization still has its head above water.

The last thing any business needs in 2024 is a highly damaging and costly cyberattack or data leak. IBM’s 2023 Cost of a Data Breach report found, “The average cost of a data breach reached an all-time high in 2023 of 4.45 million US dollars,” having increased over 15 percent in the past three years.

Keeping track of how cyberthreats are changing and adjusting your cybersecurity strategy and resourcing accordingly will help businesses avoid the punishing cost of a breach. Here are some of the changing and new threats Kaspersky researchers think you should prepare for in 2024.

AI-related threats

AI and other interconnected technologies – think augmented reality (AR), 6G, data spaces and more – made a huge splash for business in 2023, and we’ll soon be releasing some exciting new research about the impact of their widespread adoption. On top of that, we think these changes will loom large for AI in 2024:

1.    AI tools will make scams more convincing

Scammers use many techniques to get past our defenses. AI tools can now effortlessly create stunning images, even designing whole landing pages. Malicious actors will, of course, use these to craft more convincing materials for fraud, like fake marketing emails and login screens. Expect fraud-related cyberthreats to increase.

What to do about it: Improve your employee cybersecurity awareness education to help them become more vigilant to potentially fraudulent content. Use robust antivirus software to block scam emails and warn about suspicious websites.

2.    Concerns about pre-trained AI models will rise

As more organizations start using AI chatbots and large language models (LLMs) to help professionals with their work, privacy and security concerns around the data fueling these models will rise, especially in large businesses that deal with a lot of information. And for good reason – training common LLMs often relies on public datasets containing sensitive information, raising uncertainty about whether corporate data fed into these models will stay confidential or be used to train the model further.

What to do about it: Businesses must adopt policies limiting how employees can use AI products and educate staff about these policies to reduce the risk of data leaks. They may also adopt Private Large Language Models (PLLMs) – these models are trained only with datasets owned by the organization using them.

3.    AI regulation will ramp up

More countries and international organizations will join efforts to regulate AI in the coming year, especially African and Asian nations that are engaged in discussions but haven’t yet begun regulating AI domestically. Those already involved will expand regulation, adopting more specific rules, for example, around creating training datasets and using personal data.

With their experience developing and using AI, businesses can offer invaluable insight for discussions on AI regulation. Policymakers worldwide are actively seeking input from businesses, academia and the public on shaping AI governance.

In 2023, the Bletchley Declaration promoted greater uniformity in AI regulation. But with rising geopolitical tensions, cooperation between countries may reduce, derailing efforts to keep it consistent.

What to do about it: Ensure your business is staying ahead of developments in regulation and planning for how it will comply. Take opportunities to get involved in developing AI regulation.

More 2024 AI security predictions from Kaspersky researchers

Financial services threats

4.    Fraudsters will target direct payment systems

Cybercriminals will exploit increasingly popular direct payment systems like Brazil’s Pix, the US’s FedNow and India’s UPI for fraud. We’ll also see more clipboard malware – malware that steals data users copy to their clipboard – designed to attack new direct payment systems. Mobile banking trojans will increasingly exploit these systems as a quick and efficient means of cashing out ill-gotten gains.

What to do about it: Direct payment systems have enormous benefits for business but must be appropriately secured. Businesses can also educate customers to help reduce their likelihood of accidentally downloading clipboard malware or mobile banking trojans – often disguised as legitimate apps in trusted app stores.
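One defensive idea against the clipboard malware described above can be sketched simply: before a transfer is confirmed, re-check that the payment key the user pasted is still the one they copied, since hijackers swap keys between copy and paste. This is a hedged illustration with invented function names, not a real product’s logic; actual protection lives in endpoint security software rather than in the payment app itself.

```python
import hashlib

def fingerprint(value: str) -> str:
    """Short digest of a payment key that a user can visually confirm."""
    return hashlib.sha256(value.encode()).hexdigest()[:8]

def paste_is_tampered(copied_key: str, pasted_key: str) -> bool:
    """True if the pasted key no longer matches what was originally copied."""
    return fingerprint(copied_key) != fingerprint(pasted_key)
```

Showing the short fingerprint next to the confirmation screen gives users a fighting chance to notice a silently substituted Pix, UPI or wallet key.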

5.    Mobile Automated Transfer Systems (ATS) will spread worldwide

Mobile Automated Transfer System (ATS) attacks are fairly new and involve banking malware making fraudulent transactions when the user logs in to their banking app. Mobile ATS has only been seen in Brazilian malware types but could go global in 2024.

What to do about it: Those who make banking apps must ensure their security is capable of defending against Mobile ATS.

6.    Brazilian banking trojans will also keep spreading

Cybercrime originating from Brazil is well-known to be growing. As many Eastern European cybercriminals shift their focus to ransomware, Brazilian banking trojans are filling the void they leave. Trojan Grandoreiro has already targeted more than 900 banks in 40 countries.

Grandoreiro attacks start with a malicious link in a phishing email, including fake shared documents, utility bills and tax forms. It then harvests data using keyloggers, screen-grabbers or overlays on online banking login pages.

What to do about it: Despite the growing sophistication, cybersecurity awareness education still helps employees and customers avoid falling prey to phishing. Businesses must also make sure users feel comfortable reporting their suspicions.

More financial services cybersecurity predictions for 2024

Advanced persistent threats

7.    Creative exploits of wearables and smart devices will grow

In 2023, Kaspersky discovered Operation Triangulation – a stealthy new espionage campaign targeting Apple devices. Kaspersky’s investigation found five vulnerabilities in Apple’s operating systems that affect everything Apple – from smartphones to wearables like Apple Watch to smart home gadgets like Apple TV.

In 2024, we’ll see more advanced attacks on consumer devices and smart home technology, including other operating systems.

Devices like smart home cameras and connected car systems are attractive for threat actors because of their surveillance potential and tendency to run on outdated software, which makes them easier to attack.

What to do about it: Secure your business Internet of Things (IoT) devices and ensure that if they don’t need to connect to the internet, they don’t.

8.    Supply chain attacks-as-a-service: Bulk-buying access

There is a growing trend of attacking businesses through their suppliers. Small and medium companies that may lack advanced protection become gateways for hackers to access the data and infrastructure of big players. 2022 and 2023 saw breaches through identity management company Okta, which serves over 18,000 customers worldwide.

What to do about it: Supply chain attacks expert Eliza-May Austin shares advice on preventing supply chain attacks in a Tomorrow Unlocked video.

9.    More attacks on Managed File Transfer systems

Managed File Transfer (MFT) systems that let businesses securely exchange sensitive information with partners have become a cornerstone of organizational efficiency. But housing confidential information like intellectual property, financial records and customer data puts them in the crosshairs of cyber adversaries.

MFT systems’ intricate architecture and integration into business networks also means potential security weaknesses. In 2023, MFT system incidents involving MOVEit and GoAnywhere confirmed this.

What to do about it: Undertake comprehensive reviews of your MFT solutions to identify and reduce security weaknesses. Implement robust Data Loss Prevention (DLP) solutions, encrypt sensitive data and build a cybersecurity awareness culture among employees.
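As a flavor of what a Data Loss Prevention control does, here is a toy outbound-content check: it scans text for digit runs that look like payment card numbers and validates them with the Luhn checksum before a file would leave via the MFT system. A minimal sketch only — real DLP products cover far more data types, file formats and channels.

```python
import re

def luhn_valid(digits: str) -> bool:
    """Luhn checksum used by payment card numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# 13-16 digits, optionally separated by spaces or hyphens.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def find_card_numbers(text: str) -> list[str]:
    """Return candidate card numbers in text that pass the Luhn check."""
    hits = []
    for match in CARD_RE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            hits.append(digits)
    return hits
```

(The 4111 1111 1111 1111 number often used to exercise checks like this is a well-known test value, not a real account.)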

More advanced persistent threat predictions for 2024

With attention to where threats will likely grow, the year ahead need not be daunting for business. From getting ahead of AI regulation to improved cybersecurity awareness education, organizations of all sizes can stay secure even as threats grow.

https://www.kaspersky.com/blog/secure-futures-magazine/insight-story-ai-ethics/50030/ Wed, 13 Dec 2023 12:08:13 +0000

We live in a world where algorithms can make decisions and data fuels innovation. It means ethical considerations are more critical than ever for business. They must balance using new technology to increase competitive advantage while preserving integrity and protecting customers.

In our podcast Insight Story, experts Tomoko Yokoi (Switzerland), senior business executive and researcher at the Global Centre for Digital Business Transformation, IMD Business School, and Andy Crouch (UK), consultant and co-founder of ethical-AI natural language processing company Akumen, outline how AI biases can impact business and what steps businesses can take to ensure AI’s fairness. Kaspersky Global Research and Analysis Team’s Dr. Amin Hasbini expands on the privacy and responsible data use implications.

Not all AI is created ethically equal

Andy Crouch, business development consultant, Akumen

Andy’s company Akumen found a problem that needed solving. “Scores out of five are useful, but we wanted insight from written responses like product reviews, and there was no way to do it. The team created an AI solution to identify meaning like topics, emotions and sentiment. Sentiment measures opinion – positive, negative or neutral – but emotions drive behavior. It works on text feedback anywhere, which might be about consumer goods, healthcare or anything else.”

Their approach uses AI differently from generative AI tools like ChatGPT. “Our AI is rule-based, human-created and human-curated. It’s completely transparent and there are no algorithms as with large language models. We can dive in and make rules more nuanced if we recognize bias. With large language models, that would be complex and expensive.”
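Andy’s rule-based, human-curated approach can be sketched in a few lines of Python. Everything below – the emotion categories, keyword lists and scoring – is an illustrative assumption, not Akumen’s actual platform:

```python
# Illustrative rule-based emotion/sentiment tagger. The rules are explicit
# data that a human reviewer can inspect and amend -- unlike an opaque LLM.
RULES = {
    "joy": {"love", "delighted", "great"},
    "frustration": {"annoying", "broken", "slow"},
}
POSITIVE = {"joy"}
NEGATIVE = {"frustration"}

def tag(text: str) -> dict:
    """Return the emotions found in the text and an overall sentiment."""
    words = set(text.lower().split())
    emotions = [e for e, keywords in RULES.items() if words & keywords]
    has_pos = any(e in POSITIVE for e in emotions)
    has_neg = any(e in NEGATIVE for e in emotions)
    if has_pos and not has_neg:
        sentiment = "positive"
    elif has_neg and not has_pos:
        sentiment = "negative"
    else:
        sentiment = "neutral"
    return {"emotions": emotions, "sentiment": sentiment}
```

If bias is found – say, a dialect word wrongly listed as negative – the fix is editing one rule, the cheap intervention Andy contrasts with reworking a large language model.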

Andy expands on generative AI’s limits for truly understanding people. “We asked ChatGPT how many emotions humans experience – it said 138,000. That doesn’t help us understand what drives behavior. Our platform has 22 emotions – enough to see what drives behavior. Through our partner, Civicom, we’re helping the UK’s National Health Service (NHS) to understand what patients and staff experience.”

And that understanding can improve lives:

Using AI to understand people’s emotions and what they’re talking about, you can quickly extract reliable insights. And if anyone questions things, you can show why the system’s highlighted this and, if needed, modify.

Andy Crouch, consultant and co-founder, Akumen

Large language models use big data pools, but there are also more contained, enterprise-level tools like ChatGPT Enterprise that businesses can furnish with their own data and control how they use it.

Tomoko sees enterprise-level tools as useful but notes they can’t do what big data can do. “Organizations are developing new functions around AI, like data annotators, who clean data before it goes into models. But is it foolproof? The beauty of using data from everywhere is it gives you insights you otherwise wouldn’t get.”

Choosing ethical suppliers

Luckily for companies using AI ethically, more businesses are adopting digital responsibility policies and choosing ethics-first suppliers.

Tomoko gives an example. “Deutsche Telekom has been a pioneer in AI ethics. They’ve trained all employees to ensure AI ethics are distributed throughout the organization. At the same time, they have about 300 suppliers and ensure it’s in all their contracts. So it goes beyond the boundaries of the company.”

But many businesses don’t know where to start. Tomoko says, “Over 250 companies have committed to AI ethics, but codified mechanisms only help if they change behavior. How can we live these principles and ideals? External experts can help, and there’s a case for individuals taking responsibility, which will have a collective impact.”

She suggests how companies frame AI ethics matters. “You can see AI ethics as value or as compliance. If it’s compliance, it will be cost- or risk-driven. But AI ethics could also be a competitive advantage.”

Andy compares AI ethics to health and safety. “If you have a health and safety director, it’s only one person’s responsibility. Change won’t happen unless everyone understands health and safety’s importance, and especially that it drives productivity and revenue.”

The competitive advantage is real. McKinsey research found 72 percent of customers considered a business’s AI policy before making an AI-related purchase.

Tomoko highlights the importance of backing up policies with action.

Companies making public commitments must change as an organization, embedding new practices. Have a grand goal of committing to AI ethics and digital responsibility, but divide it into tangible, more easily executed sub-goals.

Tomoko Yokoi, senior business executive and researcher, Global Centre for Digital Business Transformation, IMD Business School

Which AI issues should companies care about?

Tomoko outlines three places to look. “First, consider the software development lifecycle. If you’re considering developing an AI product, think of how it’s designed. Look for bias in the data.

“Second, once it’s being developed, although many companies say they’re implementing AI ethics, people developing AI-driven products don’t know how to apply those principles. So, look at how people use ethical principles in day-to-day software development.

“Third, we test products in controlled environments. Once a product launches, ask who is monitoring it and how we ensure it doesn’t gather bias and that people use it correctly.”

Tomoko is part of IMD Business School and knows that what future executives learn about AI ethics will shape future companies’ ethical behaviors with AI. She says, “First, we say everyone has a responsibility to these issues that goes beyond the company. You need to be aware of this responsibility, but also be able to make others in your team aware.”

Secondly, “What type of organizations do we want to build? We coach people to be able to handle multiple goals – not only profit but also social, environmental and ethical goals. We want them to walk away thinking of the future.”

Andy drills down into the data AI is using. “Understand how the AI model is built. Is the data you’re analyzing through that AI model ethically sourced, and are you using it ethically? The lack of transparency around large language models leaves them rife with ethical risk and bias.”

AI training data bias can have life-threatening impacts. Poor AI translations have been found to jeopardize asylum claims. Andy sees retrieval-augmented generation (RAG), which uses more thoroughly vetted datasets, as part of the solution.
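RAG’s core move – grounding answers in a vetted corpus rather than the model’s open-ended training data – can be illustrated with a toy retriever. The corpus, word-overlap scoring and prompt format below are assumptions for illustration; production systems use embedding search and a real language model:

```python
# Toy RAG sketch: retrieve the most relevant vetted document, then build a
# prompt instructing the model to answer only from that context.
VETTED_CORPUS = [
    "Asylum interviews must be conducted with a qualified interpreter.",
    "Application forms are available in the claimant's own language.",
]

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Rank documents by naive word overlap with the query."""
    q_words = set(query.lower().split())
    ranked = sorted(corpus,
                    key=lambda doc: len(q_words & set(doc.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Assemble the grounded prompt that would be sent to the model."""
    context = "\n".join(retrieve(query, VETTED_CORPUS))
    return f"Answer using ONLY this context:\n{context}\n\nQuestion: {query}"
```

Because the answer is constrained to proofed documents, a mistranslation or hallucination is easier to trace back to its source – the property that makes RAG attractive in high-stakes settings like asylum casework.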

Can we have secure and well-regulated AI?

Dr. Amin Hasbini, Head of Research Centre, Middle East, Turkey and Africa for Kaspersky Global Research and Analysis Team, thinks AI ethical standards are needed. “AI won’t self-define its ethics. Ethical standards must be programmed in.”

Since there is almost no way the public can evaluate, critique or improve AI ethics, regulation must play a part, according to Amin. “We need security and safety by design, and continuous verification of it. That would require transparency, especially from big tech vendors, and letting the public influence how these technologies develop.”

He likens the challenge to that of regulating social media. “We’re asking people to adopt technologies that can do much damage without giving them ways to ensure that doesn’t happen. The same has happened before with social media, with it being used for data leaks and fake news. European Union regulation is moving fast around AI, but AI could be much more dangerous than social media – we need rules now.”

For improved ethical data use, Amin recommends asset management controls. “If well deployed, asset management controls allow data to be classified, including which is available to AI, which can be shared publicly and which needs to stay inside the organization.”
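A minimal sketch of that classification-gating idea, assuming a hypothetical three-label scheme and policy table (real deployments would map these to the organization’s own data-governance categories):

```python
# Illustrative asset-classification gate: each data asset gets a label, and
# policy decides what may leave the organization or be fed to AI tools.
from enum import Enum

class Label(Enum):
    PUBLIC = "public"              # shareable anywhere
    INTERNAL = "internal"          # stays inside, but AI tools may use it
    CONFIDENTIAL = "confidential"  # neither shared nor sent to AI

POLICY = {
    Label.PUBLIC:       {"share_externally": True,  "ai_allowed": True},
    Label.INTERNAL:     {"share_externally": False, "ai_allowed": True},
    Label.CONFIDENTIAL: {"share_externally": False, "ai_allowed": False},
}

def may_send_to_ai(label: Label) -> bool:
    return POLICY[label]["ai_allowed"]

def may_share_publicly(label: Label) -> bool:
    return POLICY[label]["share_externally"]
```

The point is that the policy is explicit and auditable: before any dataset reaches an AI tool, its label answers the question Amin raises about which data can leave the organization.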

Andy says regulation is hard in this fast-moving space because no one knows what’s coming next. “I question anyone saying they know what will happen in the next six months or beyond. But there’s a lot of fear and lobbying going on – so go slow. If your AI-driven capability can’t deliver because it’s non-compliant, ethically or otherwise, it will be damaging.”

However, he believes regulation is necessary. “It will be interesting to see how they regulate something that’s not easily defined and morphs quickly, but we must protect those who need protecting.”

Kaspersky has recently proposed six principles for ethical use of AI in the cybersecurity industry with transparency at the core.

Getting started with AI ethics

Our experts have straightforward advice for those business executives yet to approach AI ethics.

Tomoko Yokoi, senior business executive and researcher at Global Centre for Digital Business Transformation, IMD Business School

Tomoko says, “As a mindset, remember the analog and digital worlds are the same. Your analog-world values should extend into the digital world.”

Andy highlights the need for both widespread knowledge and deep expertise. “Get your whole team conversant with AI, but have a well-informed friend who lives and breathes this stuff to call when there are challenges.”

With headlines about AI taking our jobs and AI pioneers like Geoffrey Hinton sounding the alarm on unregulated AI’s perils, it’s easy to write off AI ethics as a problem too hard to fix. But these complex issues need priority.

There are green shoots of change. In December 2023, the AI Alliance launched to focus on developing AI responsibly, including safety and security tools. Its 50 members include Meta, IBM, CERN and Cornell. The message may be, ‘Let’s not move too fast and not break things.’ With OpenAI, creators of ChatGPT, not invited to the party, could the tortoise of collective corporations beat the nimble hare of innovation?

AI gives business potential for great gains but comes with great risks to reputation, security and privacy. With strong ethical AI policies translated into action and widespread knowledge among employees, businesses can have more confidence to take advantage of AI’s many benefits.

]]>
Cybersecurity & Technology News | Secure Futures | Kaspersky https://www.kaspersky.com/blog/secure-futures-magazine/insight-story-digital-sovereignty/49976/ Wed, 06 Dec 2023 18:43:50 +0000 https://www.kaspersky.com/blog/?post_type=emagazine&p=49976 Data. It seems businesses can’t live with it – without big friction – and can’t live without it. Cory Doctorow wrote for Secure Futures in 2020 that data – rather than being the ‘new oil’ – can fast become toxic waste, without due attention to minimization and proper management.

Now that data fuels almost every business, it’s critical that leaders understand the next evolution of data regulation – digital sovereignty – and how it applies to their work. It’s not just about complying with the law, but an opportunity for competitive advantage, with the bonus of enhancing customer trust.

In our podcast Insight Story, experts Ben Farrand (UK), Professor in Law and Emerging Technologies at Newcastle University, and Sille Sepp (Finland), Director of Operations at MyData Global, explore how business can do better in an increasingly interconnected world. Dr. Amin Hasbini of Kaspersky’s Global Research and Analysis Team talks about the role of transparency in digital sovereignty, and why sharing intelligence makes good business sense.

What is digital sovereignty, and what does it mean for business?

Ben says, “Digital sovereignty is about geopolitics. We’re seeing increasing tensions between states and concern over the power of big market players like Google, Amazon and Facebook in the US and Chinese companies like Baidu, Alibaba and Tencent. When data moves between countries, there may be detrimental impacts from geopolitics and national regimes, regulations or policies.”

Against this backdrop, the EU introduced its General Data Protection Regulation (GDPR) in 2016, providing that identifying, personal or sensitive data about EU citizens kept anywhere in the world is governed by EU law.

GDPR and digital sovereignty more broadly mean companies must think about the legal implications of their cross-border interactions.

Ben Farrand, Professor in Law and Emerging Technologies, Newcastle University

Ben recommends firms wanting to trade more internationally have people dedicated to data security and digital sovereignty. “Companies handling EU citizens’ personal data need someone aware of how to comply with GDPR’s requirements. In the EU and UK, they’re often called data protection officers. They must know data security protocols, and how data can and can’t be used.”

Can state concerns and business interests align?

Businesses often don’t have the same priorities as nations. Is digital sovereignty making it harder to trade internationally?

Ben says, “Digital sovereignty covers the whole supply chain from raw materials to cybersecurity services. For example, when supply chains for semiconductors collapsed during the pandemic, shutting some car manufacturers, the EU, China and the US started seeking more control of resources.”

But he questions the merit of attempts at strategic autonomy in today’s world.

Interdependence is the nature of 21st-century life. However much countries desire increased control and displays of sovereignty, supply chains are global, making cooperation essential.

Ben Farrand, Professor in Law and Emerging Technologies at Newcastle University

Getting ahead of compliance with data ethics

Digital sovereignty is on the radar for many businesses, but those who go the extra mile to do the right thing, rather than simply comply, may see advantages.

Ben says, “We’ll see both ‘stick’ and ‘carrot’ regulation. There’s much legislation about investment, like the US CHIPS and Science Act, around how companies can cooperate and seek funding to diversify supply chains and improve trade relations with third parties in ways that build resilience.”

Businesses should also note a growing focus on regulating social media platforms, given their role in communicating with customers.

“The EU recently passed the Digital Services Act. It applies to content on social media platforms and search engines that may be illegal in member states, such as hate speech or trade in prohibited goods,” says Ben.

The legislation isn’t concerned with individual instances of illegal content but requires social media platforms and search engines to have transparency, accountability and oversight systems.

Ben sees benefits for business. “Companies don’t want to be associated with illegal or immoral things. They already manage this by pulling advertising or moving over to other platforms. Digital sovereignty isn’t just coming from regulation, but market-based decisions.”

AI and digital sovereignty

The explosion of AI systems has made it even more important that organizations are clear on data ownership.

Ben sees regulation in this field coming soon. “The EU is talking about regulating high-risk systems because they believe AI shouldn’t be doing some things.”

And it’s not just the EU. “At the AI Safety Summit hosted by UK Prime Minister Rishi Sunak in November 2023, there was an agreement between the EU, US, China and others to take a united approach in managing high-risk systems.”

Again, there are business opportunities. “New technologies arise from regulation like the ‘federated computing‘ approach, where you train AI on data sets remotely, then communicate only the outcomes centrally, minimizing data privacy risks,” explains Ben.
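The federated pattern Ben describes can be sketched with the simplest possible case – computing a global statistic while raw records never leave their site. The two-site data below is made up; real federated learning averages model weights rather than means, but the privacy structure is the same:

```python
# Each site aggregates locally; only (aggregate, count) pairs travel to the
# central coordinator -- never the raw records themselves.
def local_update(records: list[float]) -> tuple[float, int]:
    """Run on-site: return only an aggregate (here, a mean) and a count."""
    return sum(records) / len(records), len(records)

def federated_average(site_updates: list[tuple[float, int]]) -> float:
    """Run centrally: combine per-site aggregates, weighted by sample count."""
    total = sum(count for _, count in site_updates)
    return sum(value * count for value, count in site_updates) / total

site_a, site_b = [1.0, 2.0, 3.0], [10.0]     # raw data stays on each site
updates = [local_update(site_a), local_update(site_b)]
global_mean = federated_average(updates)      # matches the true global mean
```

The central party learns the outcome but never sees an individual record – the data-minimization property that makes the approach attractive under sovereignty rules.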

Making space for data security and data power

All businesses want to use their data better for innovation and growth. But it may be challenging to do so securely, fairly and in ways customers trust. Data spaces are one tool for addressing safe and ethical data use while leveraging its power for better business.

Sille Sepp, Director of Operations, MyData Global

Sille Sepp is Director of Operations at human-centered data non-profit MyData Global. She explains data spaces: “They’re a systemic approach to increasing trust and sovereignty in sharing and using data across organizations and sectors. It’s not just technological infrastructure but includes business, legal and operational layers for trustworthy data sharing.” Examples include the Smart Connected Supplier Network for manufacturing supply chains and Catena-X for automotive supply chains.

MyData is involved in a cross-data space project, Data Spaces Support Center, and preparing for the Data Space for Skills. Businesses can join projects as a data provider, data user or enabling service, or look at creating their own data space with collaborators.

Sille recommends how to start: “As a partner in the Data Spaces Support Center, I recommend exploring the website, the repositories of initiatives and the contributing partners. Associations like International Data Spaces, Gaia-X and FIWARE have long mapped data space work.”

Data spaces go beyond EU-based businesses. “Many initiatives collaborate with international partners and look at business cases that will bring global benefits,” says Sille.

Sille highlights the need to put customer control of their data at the heart of developing data spaces. “When developing design principles and architecture for data spaces, we must embed human-centric principles. There are ways to empower people through design choices. Businesses can get involved as enabling services – intermediaries that serve the interests of individuals, managing permissions and returning value to them.”

Transparency’s role in digital sovereignty

Dr. Amin Hasbini of Kaspersky’s Global Research and Analysis Team explains how tech businesses can use transparency centers to uphold different states’ digital sovereignty worldwide. “Digital sovereignty means companies, especially multinationals operating in digital, need tools that comply with regulation in countries where they operate. As an example, Kaspersky operates transparency centers in several countries. These let entities check code to ensure it complies with their laws.”

But he thinks international firms can go further in building regulators’ trust.

Cooperation is the way to go. We start by sharing intelligence about recent attacks in the region. This brings discussions into a better place.

Dr. Amin Hasbini, Global Research and Analysis Team, Kaspersky

Sharing also benefits the industry as a whole. “We publish many of our threat intelligence findings, which means better threat visibility for everyone. The cybersecurity community can build on our findings,” says Amin.

Taking the first steps into digital sovereignty

Ben advises businesses to consider the basics as they start exploring digital sovereignty. “What data are you collecting and where does it go? Once you know that, you can think through the implications: Security provisions you may need to protect data within your company and outside. Think first about why you need this data, and that will help you define every step of the process.”

Let’s be super clear: Digital sovereignty, like all matters of global trade, is complex. It’s not an off-the-shelf charter or compliance checklist. Enterprises should grasp the opportunity now to get involved with data spaces and other cooperation projects to shape the new international standards for data.

Digital sovereignty regulation will only grow – particularly around the use of AI. Businesses should think through how they use data and what security should be in place to safeguard it. Digital sovereignty also presents opportunities for better business, such as through data spaces and transparency initiatives.

]]>
Cybersecurity & Technology News | Secure Futures | Kaspersky https://www.kaspersky.com/blog/secure-futures-magazine/insight-story-technology-roi/49932/ Wed, 29 Nov 2023 12:26:31 +0000 https://www.kaspersky.com/blog/?post_type=emagazine&p=49932 In a tech store, it’s tempting to convince yourself you need the latest shiny gadget. That device upgrade will radically improve your productivity. You leave with a bag of stylish little boxes and a dented credit card. Enterprise tech decision-makers also struggle with these magpie-like impulses. Recent research found GE, Ford and other major players poured 1.3 trillion US dollars into digital transformation. 70 percent of it – $900 billion – went into failed programs. That’s quite some buyer’s remorse.

Businesses know they need tech for competitive advantage – but which tech? And when they’ve got it, how do they use it effectively? In our podcast Insight Story, Olivier Blanchard, Research Director at analyst firm The Futurum Group, and Arun Narayanan, Chief Digital Officer at renewable energy service provider RES, share how business leaders can ensure the best bang for their new tech buck.

Every enterprise is different

Olivier says there’s no one-size-fits-all approach to getting better tech returns. “A major mistake is picking tools because other companies are using them. A solution might be good for your problem but might not suit your company’s technical capabilities and cultural processes.”

Instead, he suggests figuring out what you need the technology to achieve. “Understand your problems and look for the best solution to help them.”

Arun, who has overseen digital transformation in several big mining and oil and gas companies, echoes Olivier’s belief in starting in the right place. “Don’t rush around trying to work out what a technology can do for you. Instead, think about how you want to change your business. Do you want to delight customers in new ways? Make employees safer and more productive? Get better returns for investors? Imagine the business processes, then find the technology to make them work.”

He gives an example from sectors he’s worked in – oil and gas, mining and renewable energy. “What they have in common is the work is in remote places. They now have remote operations centers giving information on screen to those who can do the work but can’t spend days to weeks away.”

But the aim isn’t to replace those working on-site. “Successful digital transformation will balance what’s operationally correct at a site and what a remote center can know and understand. You can’t take all control away from operations, nor can those making minute-to-minute decisions on site examine long-term trends and patterns.”

Evolution or revolution?

Olivier thinks senior leadership can contribute to improved returns by appreciating the size of change possible. “If you want to become more competitive within your existing understanding of your company, then it’s about finding problem areas to optimize.”

Olivier Blanchard, Research Director at analyst firm The Futurum Group

If you want to reimagine your business, that’s when things get interesting. Tech becomes a series of gateways for your company to expand into, driving its evolution.

Olivier Blanchard, Research Director, The Futurum Group

Arun suggests one place to start. “You’re spending a lot of money on this, so find the waste. For example, if you don’t know where the oil is, you’re drilling in the wrong location. The business question is, how do we drill more in the right place?”

The answer isn’t always high-tech. “Once you understand the business question, you’ll know what technology to find. If software from the 1990s can solve your problem beautifully, use that. Or if you find nothing available, technology like AI might solve it.”

Getting the people on side

Failing to bring your people along on the tech journey can be expensive. Many fear technology will replace them, with Elon Musk saying there’ll be no jobs for people in the future. How can leaders ensure people understand tech is there to help, not put them out of work?

Arun says, “When calculators came out, they didn’t replace accountants – but they let them do a better job, faster. Generative AI solutions like chatbots and document generators are tools – they aim to do the mundane piece fast, then the artistic, skillful and human piece can follow.”

Arun has had much experience getting people on board with tech integration in industrial settings. “Technology change will create many new jobs. The economic models will change, jobs will be created and lost.”

Arun Narayanan, Chief Digital Officer at renewable energy service provider RES

The only person who can protect you in this transition is yourself, and education is the only tool that can protect you.

Arun Narayanan (US), Chief Digital Officer, RES

“We’ll still need project managers, HR and people who can communicate. You don’t need to be a technical expert to take part in the transformation.”

Olivier highlights that cost savings can flow from upskilling employees. “It’s easier for someone to learn a new skill than learn everything about a company from scratch. Focusing on employee retention and development rather than letting workers fall by the wayside makes good financial sense.”

Cybersecurity education’s role in new tech adoption

Dr. Amin Hasbini, Head of Research Center Middle East, Turkey and Africa for Kaspersky Global Research and Analysis Team (GReAT), thinks successful tech adoption means getting everyone up to speed on cybersecurity.

But many companies aren’t yet doing it. Amin says, “A recent Kaspersky and Longitude study found organizations that emphasize cybersecurity training have 25 percent greater readiness for dealing with security incidents involving employees.”

The most effective forms of cybersecurity education are generally immersive. Amin says, “Gaming and VR are great for security training because they allow better engagement and learning. Gamification and competition challenges improve managers’ and directors’ safeguarding decisions.”

Some approaches are even more immersive. “The most successful involve simulating a real attack on the organization when only a few people in top management know it’s a simulation. They test employee capacity and how well teams cooperate to contain and close down an incident.”

Training must go beyond the IT team. “Security is everyone’s job, from desk employees in a remote office to top managers. If everyone is aware, it stops incidents getting big.”

How much risk is right?

Arun says businesses must accept that in making room for success, some initiatives will fail. “If you’re reimagining with a bold vision, you’re guessing about the future. Some guesses will come true – others won’t. If your structure has no room for failure, you sell out the opportunity for innovation.”

He says one way is to make a few calculated smaller bets rather than one big bet. Even if some fail, it’s better than not taking those risks.

Olivier points to lower-risk ways to trial solutions. “You don’t have to buy all your technology anymore. With cloud services and managed services, fantastic industry partners can help you test, implement or scale up new tech. There’s a giant menu of opportunities to get expert help.”

With enterprises finding many tech investments don’t pay off, there’s much potential for competitive advantage in smarter exploration of new tech options. Business leaders may find more implementations succeed if they start with the right problems and upskill their people as part of the transformation.

]]>
Cybersecurity & Technology News | Secure Futures | Kaspersky https://www.kaspersky.com/blog/secure-futures-magazine/insight-story-quantum-computing/49559/ Wed, 08 Nov 2023 08:04:40 +0000 https://www.kaspersky.com/blog/?post_type=emagazine&p=49559 In Douglas Adams’ 1979 science fiction comedy novel The Hitchhiker’s Guide to the Galaxy, a supercomputer called Deep Thought takes 7.5 million years to answer the question, “What is the meaning of life?” Compared with today’s machines, quantum computers will be like fiber broadband next to 1990s dial-up internet. Google’s quantum computer can perform a calculation 158 million times faster than classical computers.

Quantum computing’s sped-up computations could reshape nearly every industry. But it also has strange quirks that make it hard to use in ways that beat classical computer abilities.

At least, that’s how things are today. Businesses that value innovation are throwing their hat in the quantum ring to ensure they’re ready for sweeping change. Should your business get involved – and if so, how?

In Insight Story Season 2 Episode 3, I discuss quantum computing’s relevance to business with Dr. Oliver Thomson Brown (UK), Chancellor’s Fellow at EPCC (formerly Edinburgh Parallel Computing Centre), University of Edinburgh, and Dr. Henning Soller (Germany), Partner and Director of Quantum Research, McKinsey & Company.

What is quantum computing?

Quantum computers are the next step up from the powerful supercomputers that often power AI applications today. They’ll be part of our computing future but play only a fledgling role today.

Quantum and classical computers are very different beasts – in fact, they can’t even communicate with one another. Quantum data is different, too – for example, you can’t copy it. Some quantum computers must be kept at very low temperatures in purpose-built fridges.

These features may not sound encouraging, but quantum computers have potential to do some things much better and faster. These abilities could be game-changing in many sectors, including finance, chemicals and logistics.

How could business use quantum computing?

Despite quantum computers currently being about as sophisticated as the classical computers of the mid-20th century, business is interested.

Henning advises companies throughout Europe and the Middle East on large-scale IT and data transformations. He’s confident quantum’s impact will be huge. “We haven’t yet proven a case where quantum computers outperform classical computers, but we’re certain this technology will be the major disruption for business in the coming decade. We estimate economic potential of at least several trillion US dollars.”

Oliver’s research focuses on the interaction of quantum and high-performance computing. He says quantum could help with problems where the solution is a sequence of binary numbers. “That might include logistical problems around deciding where things go. For example, how do you fill beds in a hospital ward? Or flow- or network-type problems, like traffic management. Quantum computers may also have advantages when finding the best configuration for a molecule, like in drug discovery.”
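Problems whose answer is a bit-string are often cast as QUBO (quadratic unconstrained binary optimization), the form quantum annealers and many gate-based optimizers target. A classical brute-force sketch over an illustrative three-variable cost matrix shows the shape of the problem:

```python
# Brute-force QUBO: find the bit-string minimizing a quadratic cost.
# The matrix Q is made up for illustration: diagonal terms reward switching
# a bit on, off-diagonal terms penalize certain pairs being on together.
from itertools import product

Q = [
    [-1, 2, 0],
    [2, -1, 2],
    [0, 2, -1],
]

def cost(bits: tuple[int, ...]) -> int:
    n = len(bits)
    return sum(Q[i][j] * bits[i] * bits[j] for i in range(n) for j in range(n))

best = min(product([0, 1], repeat=3), key=cost)  # exhaustive: 2^3 candidates
```

Exhaustive search doubles in cost with every added variable – the exponential wall that quantum approaches hope to climb faster.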

Quantum might also work well for problems that scale poorly on classical computers. “We’re looking at problems in aviation and genomics. Genome assembly scales factorially – with a quantum computer, we may be able to reduce that by a lot.”

Henning adds, “We could use it for experiments that we need humans or animals for today. We could test chemicals much faster and transact money in different but completely secure ways. Some banks are using it for derivative pricing and Monte Carlo simulations.”

It’s something of a hammer looking for a nail, he says. “It’s not about having an advantage today, but being able to exploit the advantage tomorrow.”

Quantum challenges

Oliver says, “Adoption will be driven by how easy they are to use.” But today, quantum computers aren’t easy to use. “You have qubits, and multiple qubits form a register. Then you set up quantum gates in a circuit that apply to that qubit register and transform it – hopefully – into a state representing the solution to your problem.”

Dr. Oliver Thomson Brown (UK), Chancellor’s Fellow at EPCC (formerly Edinburgh Parallel Computing Centre), University of Edinburgh


If you didn’t understand that, don’t worry – classical computers don’t get it either. That’s one of the biggest challenges that must be overcome for quantum to go mainstream, says Oliver. “You have results stored as quantum information on qubits in your qubit register, but you can’t read that out directly into a classical computer. So what do you do with it?”
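Oliver’s register-and-gates description, and the readout problem, can be illustrated by simulating a single qubit classically (a pedagogical sketch, not how real quantum hardware is programmed):

```python
# One-qubit statevector demo: a Hadamard gate transforms the register, but
# "reading out" only yields classical bits with certain probabilities --
# the amplitudes themselves can never be observed directly.
import math

def hadamard(state: list[complex]) -> list[complex]:
    a, b = state                  # amplitudes of |0> and |1>
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

state = hadamard([1, 0])                    # start in |0>, apply one gate
probs = [abs(amp) ** 2 for amp in state]    # measurement probabilities
```

After the gate, each measurement returns 0 or 1 with probability one half – a single random bit per readout, which is exactly the translation gap Oliver describes between quantum registers and classical computers.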

And that’s not all. “You can’t copy quantum information. That’s great for security but terrible for computing,” says Oliver.

With technology this imperfect – and let’s face it, confusing to anyone without a computer science PhD – the business benefits are less obvious. Henning suggests looking at the bigger picture.

People ask, can we do better derivative pricing on a quantum or classical computer? At this stage, it’s on the classical computer. But what’s the stakeholder and shareholder value in having a better outlook on innovation?

Dr. Henning Soller, Partner and Director of Quantum Research, McKinsey & Company

Dr. Henning Soller (Germany), Partner and Director of Quantum Research, McKinsey & Company


“There is value in investing in quantum computing already, but not in profit and loss terms.”

Is quantum a security threat?

A 2022 study claimed that it will soon be possible to crack the most established crypto algorithm by combining classical and quantum computing.

It comes down to the role giant prime numbers play in encryption. Oliver says, “The first big quantum computing application was factoring numbers that are products of large primes. It has no practical use except that we use large prime numbers for encryption. But security people now know it’s not a good base for security, and they’re looking at alternatives. It must be something quantum computers are bad at.”
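The link between factoring and encryption can be made concrete with a textbook-sized RSA example. The primes below are deliberately tiny so trial division breaks the key instantly; real keys use primes hundreds of digits long, which only a large, error-corrected quantum computer running Shor’s algorithm could threaten:

```python
# Toy RSA: whoever can factor the public modulus n can rebuild the private key.
def factor(n: int) -> tuple[int, int]:
    """Trial division -- feasible only for toy moduli like this one."""
    d = 2
    while n % d:
        d += 1
    return d, n // d

p, q = 61, 53
n, e = p * q, 17                        # public key (n = 3233)
d = pow(e, -1, (p - 1) * (q - 1))       # private exponent (Python 3.8+)

ciphertext = pow(42, e, n)              # encrypt the message 42
fp, fq = factor(n)                      # attacker factors n...
d_cracked = pow(e, -1, (fp - 1) * (fq - 1))   # ...and rebuilds d
plaintext = pow(ciphertext, d_cracked, n)     # recovers 42
```

The entire security of the scheme rests on factoring being slow – which is why post-quantum cryptography picks problems quantum computers are also bad at.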

Dr. Amin Hasbini, Head of Research Centre Middle East, Turkey and Africa for Kaspersky’s Global Research and Analysis Team, agrees the cybersecurity industry is one step ahead.

Current cryptography looks stone-age compared with quantum processing, but we’re already developing quantum-proof encryption.

Dr. Amin Hasbini, Head of Research Centre Middle East, Turkey and Africa, Global Research and Analysis Team (GReAT), Kaspersky

“Quantum computing will also make encrypted data harder to hack, but hackers will find ways to adapt as they always have.”

Henning sees the need for improved encryption as a quantum business opportunity. “Quantum cryptography is a whole new industry. It’s a major area of active investment.”

He doesn’t see hackers accessing quantum tech just yet. “This is elaborate technology – not something you can steal and easily operate.”

Oliver thinks one quantum quirk might foil decryption attempts. “The error rate for quantum computers is huge compared to classical computers. You’re likely to hit an error with any big circuit. You’d need a very large quantum computer, and we’re nowhere near that.”

What will quantum computing’s future look like?

What are the next steps to unlocking the potential of quantum computing? Henning says, “The first machines will be hybrid by design because programming them will require a classical computer. Several technologies will need to come together to operate them successfully. They must intelligently break down a problem into what suits each technology and then put the overall result together.”

He suggests business leaders embrace the future. “This revolution is coming – not tomorrow, but in a foreseeable timeframe. It may make sense to set up a smaller team to scout the technology and identify use cases. Look for opportunities to partner with quantum startups. Your business can bring the knowledge of problems that need solving.”

Oliver agrees it’s not about having all the expertise in-house. “Build partnerships with specialists who understand how to get the most out of it.”

Quantum computers can’t do much today, but their potential is undeniable. Businesses exploring this new realm are investing in a reputation for innovation and the possibility of solving today’s hard problems. Difficulty accessing and using the technology should keep hackers at bay for now, but cyber researchers are working on new encryption methods that quantum computers can’t crack.

https://www.kaspersky.com/blog/secure-futures-magazine/insight-story-iiot-industry/49507/ – Fri, 03 Nov 2023

Imagine every part of the workplace – from manufacturing equipment to energy grids, healthcare devices to farms – had the connectivity of a smartphone. That’s the Industrial Internet of Things (IIoT) – sometimes known as Industry 4.0. It brings a host of efficiencies, like real-time data analysis and improved predictive maintenance.

But with great connectivity comes great responsibility. IIoT can be especially vulnerable to attack. And while it’s now widely used, many businesses know their IIoT systems are poorly protected.

In the second episode of Insight Story season 2, guests Chris Kubecka, Netherlands-based security researcher, cyberwarfare specialist and CEO of HypaSec, and Alison Peace, patient management operations manager for UK and Ireland at Medtronic, illuminate how industry can use and protect game-changing IIoT.

Where is IIoT most commonly used?

All sectors are using IIoT, but some more than others. Chris says, “The maritime industry uses IIoT a lot. It’s also widely used in the space industry, medical devices and critical infrastructure.”

Alison Peace, UK and Ireland Patient Management Operations Manager at medical device and therapy producer Medtronic

Medtronic is a global developer and producer of medical devices and therapies like insulin pumps, pacemakers and implantable defibrillators – all increasingly connected. Alison explains the patient benefits: “In the UK, more than 100,000 patients receive an implanted cardiac device each year. They then have constant hospital checks, which places a burden on healthcare services. Remote monitoring for cardiac devices started in basic form almost 20 years ago. Devices can now send wireless alerts if they detect a problem. Data shows patient outcomes are better – they don’t go into hospital as much.”

Cybercriminals have noticed loosely protected IIoT

Despite its relative newness, there have been many documented attacks on IIoT.

Chris Kubecka, security researcher, cyberwarfare specialist and CEO of HypaSec

In 2014, attackers compromised a German steel mill’s systems, gaining access through the mill’s office network and then its industrial control system. The compromise prevented a blast furnace from shutting down properly, causing massive physical damage.

Even without a breach, users finding vulnerabilities in everyday tech means reputation-damaging headlines. At-home stationary fitness bike maker Peloton was embarrassed when a security researcher found its equipment exposed an unauthenticated interface that allowed access to users’ private information like weight, gender and date of birth.

Similarly, a hacker accessed footage from Verkada internet-connected security cameras.

Securing industrial smarts

Dr. Amin Hasbini, Head of Research Center Middle East, Turkey and Africa for Kaspersky’s Global Research and Analysis Team (GReAT), is concerned about the gap between businesses that use IIoT and those that fully secure it. “A recent Kaspersky study found over 60 percent of businesses use IoT. But close to half say these systems aren’t fully protected. A third of these organizations blame lack of budget, but when it’s not resources stopping them, what is it?”

Whatever it is, senior leaders in organizations using IIoT must shift the barriers to best-practice security.

Amin says, “Some technology vendors race to add features while largely ignoring security.”

When vendors demonstrate a solution out-of-the-box, it’s always as magnificent as a butterfly. But once implemented and confronting real-life scenarios, it’s as vulnerable as a butterfly too.

Dr. Amin Hasbini, Head of Research Center Middle East, Turkey and Africa, Global Research and Analysis Team (GReAT), Kaspersky

“The challenge starts at the top in each organization. If security becomes a priority, it gets translated into policies, guidelines and methods.”

Chris advises thinking about when security may not be front of mind in your organization’s tech decision-making. “Your procurement department will be looking for the least expensive deal, but that deal might not include the best security.”

She continues, “Many IIoT systems come with older operating systems that don’t have the security settings you’d want. And then, there may not be a secure way to update the software. These are some of the risks. Know what you’re buying so you can plan ahead and mitigate those risks.”

Alison says medical devices are now made differently to ensure security. “It’s important to incorporate an encryption module to make sure others can’t read the device’s data. Our devices don’t connect to the internet, but use a pass-through to a monitor or app. Data is encrypted in the device and sent encrypted.”
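As an illustration of the encrypt-then-pass-through pattern Alison describes, here is a minimal sketch – not Medtronic’s actual scheme, and all field names are invented. Python’s standard library has no AES, so a SHA-256 counter keystream stands in for a real cipher here; production code would use a vetted AEAD mode such as AES-GCM.

```python
# Illustrative sketch: seal device telemetry so a pass-through relay
# (monitor or phone app) carries bytes it can neither read nor silently alter.
import hashlib
import hmac
import os

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """XOR data with a SHA-256 counter keystream (toy cipher, demo only)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

def seal(enc_key: bytes, mac_key: bytes, telemetry: bytes) -> bytes:
    nonce = os.urandom(16)
    ct = keystream_xor(enc_key, nonce, telemetry)
    # Encrypt-then-MAC: authenticate the nonce and ciphertext together.
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def open_sealed(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expect = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        raise ValueError("telemetry tampered with in transit")
    return keystream_xor(enc_key, nonce, ct)

enc_key, mac_key = os.urandom(32), os.urandom(32)
blob = seal(enc_key, mac_key, b'{"heart_rate": 62, "battery": 81}')
print(open_sealed(enc_key, mac_key, blob))  # round-trips; relay sees only ciphertext
```

The point of the design is that the relay in the middle needs no keys at all: it forwards an opaque blob, and any modification en route fails the HMAC check at the hospital end.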

Alison believes the high standards of the institutions they work with help give patients confidence in their devices. “In the UK and Ireland there are strict controls when health systems engage third parties. You must have rules, regulations and systems in place to work with hospitals.”

Chris also has recommendations for contracts with third parties. “For encryption, your contract should specify meeting the standards of the time. So when you renew, the expectation is to keep to those standards. Have a responsible disclosure policy, and ensure your suppliers have good data security and privacy policies.”

She says don’t be shy to end a technology vendor relationship if security conversations feel awkward. “If you don’t feel comfortable speaking about cybersecurity and privacy with your supplier, look for a new one. Look at suppliers who take part in security conferences. If they’re actively looking at security, that gives credence.”

Is regulation keeping up with IIoT?

Chris thinks businesses should expect regulation around IIoT to speed up. “The tech’s definitely moving faster than the law, but there are some guidelines and frameworks.”

More governments and industries are aware of the potential risks. We’re tackling this problem: we’re able to talk about it with people who aren’t super tech nerds.

Chris Kubecka, security researcher, cyberwarfare specialist and CEO, HypaSec

Alison feels those designing IIoT must allow for changing security requirements – something Medtronic has strived for. “Our global security office is tasked with making sure our devices comply with standards worldwide. National legislation could say, we want the data from your device in this format, and our devices are designed to enable that.”

Simplifying a tangled net of things

Chris thinks IIoT manufacturers could aspire to lead in many ways, but chiefly, making levels of security simple to understand. “Start applying what I call ‘easy standards,’ like a traffic light system, so consumers and companies can know if something has a minimum level of security – for example, can it be updated? Medical uses would need a higher standard compared with consumer home grade.”
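A hypothetical sketch of what Chris’s traffic-light “easy standard” might look like in code – the criteria and thresholds below are invented for illustration and not drawn from any real scheme:

```python
# Hypothetical traffic-light rating for an IIoT device (illustrative only).
def security_rating(device: dict) -> str:
    """Map basic security properties of a device to red/amber/green."""
    must_have = [
        device.get("receives_updates", False),     # can the software be patched?
        device.get("encrypts_data", False),        # data unreadable in transit?
        device.get("no_default_password", False),  # unique credentials per unit?
    ]
    nice_to_have = [
        device.get("vendor_disclosure_policy", False),  # responsible disclosure?
        device.get("supported_os", False),              # OS still receiving fixes?
    ]
    if not all(must_have):
        return "red"    # fails the minimum bar -- avoid, or isolate heavily
    if all(nice_to_have):
        return "green"  # minimum bar plus good vendor hygiene
    return "amber"      # usable, but plan mitigations

legacy_sensor = {"receives_updates": False, "encrypts_data": True}
print(security_rating(legacy_sensor))  # -> red
```

A medical-grade profile, as Chris suggests, could simply demand a longer `must_have` list before anything rates above red, while consumer home grade keeps the shorter one.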

Alison agrees that clear and standard practice matter. “A third party can easily comply with clearly communicated, standardized security requirements. Open communication and clear criteria are essential.”

IIoT is already commonplace and will only grow as those organizations yet to adopt see its potential to improve productivity and reduce costs. As technology becomes increasingly connected, securing IIoT is fundamental to the safety of just about everything in our lives. As Chris warns, “I want to retire knowing my own technology won’t kill me.”

Leaders can keep their organization and customers safe by asking the right questions and pursuing IIoT vendors who prioritize security.

https://www.kaspersky.com/blog/secure-futures-magazine/insight-story-generative-ai/49221/ – Mon, 16 Oct 2023

Headlines about the power of generative artificial intelligence (gen AI) are everywhere. But business leaders are asking, is it really that game-changing, or just a fad? And if it is a game-changer, how can you make the most of it?

In the first episode of our second season of Kaspersky’s podcast Insight Story, I speak with experts Shagun Sachdeva (India), Project Manager for Disruptive Tech at business intelligence service GlobalData Plc, and Karen Quinn (UK), Senior Director, Brand and Corporate Communications at financial software provider Finastra.

What is generative AI?

TechTarget defines gen AI as any “artificial intelligence technology that can produce various types of content, including text, imagery, audio and synthetic data.” Some applications rely on Natural Language Processing so users can ask for outputs using everyday speech.

Well-known examples of gen AI include OpenAI‘s chatbot ChatGPT and Google Bard, but current and potential business uses are near endless.

Insight Story: Shagun Sachdeva (India), Project Manager for Disruptive Tech at business intelligence service GlobalData Plc

Shagun Sachdeva, GlobalData Plc

Shagun thinks many businesses already understand its importance. “A recent GlobalData Plc survey found more than 50 percent of businesses expect gen AI will tangibly disrupt their industry in the next five years. 30 percent are already using gen AI tools.”

How are businesses using gen AI?

“Business leaders are having more sophisticated conversations about how gen AI can go from an experiment to giving strong returns on investment,” says Shagun. “Whether it’s chatbots engaging with customers, sharing financial tips and customizing financial plans based on individual spending habits, or generating loan options based on a user’s credit profile.”

Finastra plans to develop gen AI tools for customers in the future, but they’ve started in-house with GENAI (X), a gen AI-based system now rolled out to their 8,000 employees.

Insight Story: Karen Quinn (UK), Senior Director, Brand and Corporate Communications at financial software provider Finastra

Karen Quinn, Finastra

We look at gen AI from a human empowerment angle – giving people back time. These tools give space to reflect, imagine and create rather than just do, do, do.

Karen Quinn, Senior Director, Brand and Corporate Communications, Finastra

“We use it for things like understanding and reviewing contracts, predicting behaviors and optimizing workflows,” says Karen.

She believes that although only used internally so far, their gen AI tools are benefiting customers. “We can only work with the data we have to make financial decisions. Anything that can generate new scenarios, pull in more data sources or enable federated learning can only benefit.”

Is gen AI a threat to jobs?

Shagun acknowledges employees worry AI might replace them. “Some of that anxiety may be justified – a Goldman Sachs report in March 2023 said AI could replace 300 million full-time jobs.” But getting to know gen AI better may help overcome that concern. “Lean in to the technology. Education and training are key. Start with courses like prompt engineering.”

Karen refers to a quote from economist and professor Richard Baldwin, “AI won’t take your job. It’s somebody using AI that will take your job.” In other words, it’s not the technology itself that’s a threat to jobs but failing to explore it and take full advantage of it.

She advises looking at the benefits of gen AI broadly. “This is not an efficiency play. People will become more productive, but hopefully that means work is more rewarding.”

Getting gen AI right

While generative AI offers much possibility, it also raises concerns around ethics, bias and data privacy. It is largely unregulated today and developing AI models can be resource-intensive.

Karen highlights Finastra’s caution. “We rigorously tested the products, then launched a full-scale training program and disabled other tools. They’re still going through testing, like panels to try and break them and make sure no sensitive data gets shared with the wrong audiences.”

She continues, “In brand and communications, there are copyright issues. If you’re using text-to-image prompts, where is it drawing data from? We need to be very careful.”

Materials used to train AI are often copyrighted, and whether people or businesses can copyright outputs of gen AI is subject to legal challenges in many countries, with some courts – including in the US and Italy – ruling no, at least in the case of visual arts.

Balancing gen AI optimism with realism

Shagun feels misinformation is the biggest business risk from gen AI. “It can fabricate ‘facts’ – what Google researchers call hallucinations.”

She also recommends businesses pay attention to copyright. “Disputes have arisen between artists and AI companies over the value of human creativity.”

Data privacy and security also rate highly for Shagun. “Large language models are trained on vast swathes of internet data. There’s no data protection embedded in these systems by design or default. Training data can fail to include women, older people or marginalized groups – that’s an ethical challenge.”

Security and generative AI

Amin Hasbini is Kaspersky’s Head of Research Center, Middle East, Turkey and Africa, for its Global Research and Analysis Team (GReAT). He says many employees are already using freely available AI tools, potentially without their employers knowing. “In a 2023 Kaspersky study, we found 57 percent of workers are using generative AI to save time. This raises many security questions. What kind of data are they putting into it? Is it intellectual property, code or documents to summarize? That’s a major concern.”

Cybercriminals are also using generative AI to help them fool people with more realistic fake websites. In 2022, Kaspersky’s anti-phishing system blocked more than 500 million attempts to access fraudulent websites – double the number from the previous year.

It’s likely cybercriminals are already using generative AI chatbot ChatGPT’s ability to produce convincing texts to create automated spearphishing attacks – phishing targeted at specific people.

To protect against generative AI-based threats, Amin advises, “It starts with awareness. Employees need to know their organization’s boundaries around putting data into AI websites. Businesses need enterprise cybersecurity solutions that allow monitoring and control of devices, systems and data used within the organization.”

Leading approaches to employee cybersecurity awareness include implementing a cyber champions program or targeted education, as used by Heathrow Airport.

Gen AI’s future potential

What kinds of applications for gen AI will we see businesses pursuing in the near future?

Shagun believes what businesses do today will decide the future of gen AI. “The world will look completely different by 2030. GlobalData Plc estimates the AI market will be worth around $900 billion by 2030.”

Gen AI will transform all aspects of our lives. This is a make-or-break time for industry leaders.

Shagun Sachdeva, Project Manager for Disruptive Tech, GlobalData Plc

But that doesn’t mean carelessly diving in. “Gen AI is no magic bullet. While the enthusiasm around it is justified, prudence is imperative. We need responsible innovation,” says Shagun.

Karen imagines gen AI helping to make markets fairer. “Gen AI has incredible potential to overcome inefficiencies, like small-to-medium business (SMB) access to trade finance. Gen AI might be able to generate or automate some of the processes for onboarding smaller companies into global trade, opening them up to a wider audience.”

She also thinks gen AI could help solve some of the problems it’s been known to introduce. “It can help identify bias through explainability and ensure we close feedback loops that cause biases. It could also look at efficiencies in supply chains, contracts and communications and marketing. It can take out some of the drudgery and enrich the processes.”

“We’re on the cusp of understanding what gen AI can do,” Karen says. “Who knows what the future brings? This could be game-changing in ways we can’t even imagine.”

Like Karen and Shagun, as a creative marketer, I’m torn between optimism and a little fear about what gen AI will bring. As I wrote following a gen AI talk for marketers, where I heard about Karen’s exceptional AI program at Finastra, I’m optimistic we can do it right.

With the training, technologies and policies to bring the whole organization on the journey, we can use this tech to free ourselves by shifting mundane tasks to these somewhat smart bots. But in that shift, we may narrow opportunities – particularly for talent entering our industries – to refine their craft and learn the difference between average and distinctive output, a refinement that generative AI lacks. For now, anyway.
