information analyst
KM, EDM/EDMS, workflow, collaboration
Rescooped by michel verstrepen from ICT Security-Sécurité PC et Internet

ChatGPT : une faille de sécurité menace la vie privée sur Internet

ChatGPT is the victim of a new security flaw. By exploiting this breach, it is possible to extract sensitive data about individuals simply by talking to OpenAI's chatbot.

Via Gust MEES
Rescooped by michel verstrepen from ICT Security-Sécurité PC et Internet

A New Attack Impacts ChatGPT—and No One Knows How to Stop It

ChatGPT and its artificially intelligent siblings have been tweaked over and over to prevent troublemakers from getting them to spit out undesirable messages such as hate speech, personal information, or step-by-step instructions for building an improvised bomb. But researchers at Carnegie Mellon University last week showed that adding a simple incantation to a prompt—a string of text that might look like gobbledygook to you or me but which carries subtle significance to an AI model trained on huge quantities of web data—can defy all of these defenses in several popular chatbots at once.

Via Gust MEES

Gust MEES's curator insight, August 3, 2023 9:13 AM

Rescooped by michel verstrepen from ICT Security-Sécurité PC et Internet

Subscription malware: Google's and Apple's stores flooded with expensive ChatGPT fakes

Sophos warns of ChatGPT copycat apps in Apple's and Google's app stores that rip off unsuspecting users with concealed fees.

Via Gust MEES
Rescooped by michel verstrepen from 21st Century Innovative Technologies and Developments as also discoveries, curiosity ( insolite)...

Researchers make ChatGPT generate malware code

We know that the popular ChatGPT AI bot can be used to message Tinder matches. It can also turn into a swankier version of Siri or get basic facts completely wrong, depending on how you use it. Now, someone has used it to make malware.

In a new report by the security company CyberArk (reported by InfoSecurity Magazine), researchers found that you can trick ChatGPT into creating malware code for you. What's worse is that said malware can be difficult for cybersecurity systems to deal with.

Via Gust MEES
Gust MEES's curator insight, January 25, 2023 8:16 AM

Rescooped by michel verstrepen from ICT Security-Sécurité PC et Internet

Researchers Leverage ChatGPT to Expose Notorious macOS Malware


Russian hackers and cybercrime forums are notorious for exploiting critical infrastructure. Last month, Hackread.com exclusively reported that a Russian-speaking threat actor was selling access to a US military satellite. Now, researchers have identified macOS malware being sold for $60,000.

Via Gust MEES
Gust MEES's curator insight, August 3, 2023 12:57 PM

Rescooped by michel verstrepen from ICT Security-Sécurité PC et Internet

Researchers jailbreak AI chatbots like ChatGPT, Claude


Researchers jailbreak AI chatbots, including ChatGPT
Like a magic wand that turns chatbots evil.

Via Gust MEES
Rescooped by michel verstrepen from ICT Security-Sécurité PC et Internet

The ChatGPT bug exposed more private data than previously thought

A ChatGPT bug found earlier this week also revealed users' payment information, says OpenAI.

The AI chatbot was shut down on March 20, due to a bug that exposed titles and the first message of new conversations from active users' chat history to other users.

Now, OpenAI has shared that even more private data from a small number of users was exposed.

"In the hours before we took ChatGPT offline on Monday, it was possible for some users to see another active user's first and last name, email address, payment address, the last four digits (only) of a credit card number, and credit card expiration date," said OpenAI. "Full credit card numbers were not exposed at any time."

Via Gust MEES
Gust MEES's curator insight, March 24, 2023 3:54 PM

Angela Gold's comment, March 24, 2023 9:43 PM
look nice