MSP at the forefront against credential stuffing

Credential stuffing has been around for a while, and it is exactly what it sounds like: an attack in which hackers use a cache of compromised usernames and passwords to break into a system. Recently, however, hackers have found ways to make it far more effective, chief among them artificial intelligence (AI), which enables a much more algorithm-driven strategy. The 2024 Verizon Data Breach Investigations Report (DBIR) states that external actors perpetrated 83 percent of breaches, and 49 percent of those breaches involved the use of stolen credentials. Cybercriminals often find lists of usernames and passwords on the dark web or as a by-product of a previous cyber-attack. For example, www.HaveIBeenPwned.com has tracked over 8.5 billion compromised credentials from over 400 data breaches.

Notable attacks

Some notable, recent credential-stuffing attacks include:

- Dunkin': Dunkin' and its customers were victims of repeated credential-stuffing attacks beginning in 2015. New York State sued the doughnut and coffee chain; Dunkin' is now required to maintain safeguards against similar attacks, follow incident response procedures when an attack occurs, and pay $650,000 in penalties and costs to the state of New York.
- Norton: In January 2023, Norton LifeLock Password Manager was hit with a credential-stuffing attack. Threat actors used stolen credentials to log into customer accounts and access their data; over 925,000 people were targeted.
- Hot Topic: American retailer Hot Topic disclosed in March 2024 that two waves of credential-stuffing attacks in November 2023 exposed affected customers' personal information and partial payment data. The fast-fashion chain has over 10,000 employees across more than 630 store locations in the U.S. and Canada, its headquarters, and two distribution centers.
- Roku: Roku warned in April 2024 that 576,000 accounts were hacked in new credential-stuffing attacks, after disclosing an earlier incident in March 2024 that compromised 15,000 accounts. The company said the attackers used login information stolen from other online platforms to breach as many active Roku accounts as possible.

These are just a handful of high-profile examples. Most credential-stuffing attacks occur outside of the media glare, day after day, in offices and enterprises worldwide.
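One practical defense is checking credentials against the breach corpus cited above. As an illustrative sketch (not drawn from any of the articles here), the snippet below shows how a service might query the public Pwned Passwords range endpoint from HaveIBeenPwned, which uses k-anonymity: only the first five characters of the password's SHA-1 hash are sent over the network, never the password itself. Function names are ours, not part of any official client.

```python
import hashlib
import urllib.request

# Public HIBP Pwned Passwords k-anonymity range endpoint
PWNED_RANGE_URL = "https://api.pwnedpasswords.com/range/"


def sha1_prefix_suffix(password: str) -> tuple[str, str]:
    """Split the uppercase SHA-1 hex digest into the 5-char prefix
    sent to the API and the 35-char suffix kept locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]


def match_suffix(range_body: str, suffix: str) -> int:
    """Scan a range response (lines of 'SUFFIX:COUNT') for our hash
    suffix; return the breach count, or 0 if the password was not found."""
    for line in range_body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count.strip())
    return 0


def pwned_count(password: str) -> int:
    """Return how many times this password appears in the breach corpus
    (makes one network call; the raw password never leaves this machine)."""
    prefix, suffix = sha1_prefix_suffix(password)
    with urllib.request.urlopen(PWNED_RANGE_URL + prefix) as resp:
        return match_suffix(resp.read().decode("utf-8"), suffix)
```

A registration or password-change flow could call `pwned_count` and reject any password with a nonzero count, cutting off the reused credentials that stuffing attacks depend on.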

Why Your Foundation AI Services Are Unsafe: Expert Analysis

From OpenAI and Google to Microsoft, the entire AI developer community uses public vector databases and large language model (LLM) services that are accessible by anyone. Despite being vital components of the AI supply chain, vector databases and LLM services are often overlooked in terms of cybersecurity. Now a new study from Legit Security has found that these elements are rife with vulnerabilities, data breaches, cybersecurity risks, and exposed sensitive data.

Meta Wants To Get Small With Its AI Language Models

While large language AI models like ChatGPT, Gemini, and Llama dominate the headlines, Meta is shifting focus to small language models. According to a recently published paper by Meta’s research team, the company is betting on these smaller models as the future of AI.

AI for Everyone: How Small Language Models are Revolutionizing Accessibility and Sustainability

Artificial Intelligence (AI) has revolutionized industries, with Large Language Models (LLMs) at the forefront. These powerful systems, like ChatGPT, Google Gemini, and Microsoft Copilot, drive the cutting edge of AI capabilities. However, their extensive energy and cost demands, requiring significant data center resources, pose challenges in scalability and accessibility, especially for global end-users.

The Biden Administration’s AI Regulation Stance

In a move that has triggered a whirlwind of responses, the Biden Administration has decided not to immediately regulate the development of AI. The revelations came in a report from the US Department of Commerce’s National Telecommunications and Information Administration. The report clearly states that “the government will not be immediately restricting the wide availability of open model weights.”

White House opts to not add regulatory restrictions on AI development – for now

The Biden Administration on Tuesday issued an AI report in which it said it would not be “immediately restricting the wide availability of open model weights [numerical parameters that help determine a model’s response to inputs] in the largest AI systems,” but it stressed that it might change that position at an unspecified point.

Meta Wants To Get Small With Its AI Language Models

While large language AI models continue to make headlines, small language models are where the action is. At least, that’s what Meta appears to be betting on, according to a paper recently released by a team of its research scientists.

AI Game-Changers: ChatGPT vs. Llama

Artificial intelligence enjoyed a breakout year in 2023 as improvements to machine learning and natural language processing made AI much more practical to use...

Tech leaders sound off on new AI regulations

Last month, the Biden administration issued a sweeping executive order on artificial intelligence, focusing in particular on privacy concerns and the potential for bias in AI-aided decision-making, either of which could violate citizens' civil rights. The executive order was a tangible indication that AI is on the government's regulatory radar.