Scooped by Gust MEES
An AI threat guide, outlining cyberattacks that target or leverage machine learning models, was published by the National Institute of Standards and Technology (NIST) on Jan. 4.
The nearly 100-page paper, titled “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations,” provides a comprehensive overview of the cybersecurity and privacy risks that come with the rapid development of both predictive and generative AI tools over the last few years.
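One family of attacks the taxonomy covers is evasion. As a loose, hypothetical illustration (not taken from the NIST paper), a small, targeted perturbation can flip a toy linear classifier's decision; the weights and input below are invented:

```python
# A toy "evasion" attack, not taken from the NIST paper: a small, targeted
# perturbation flips a linear classifier's decision. Weights and input are
# made up purely for illustration.
w = [1.0, -2.0, 0.5]                # hypothetical model weights
x = [0.4, -0.3, 0.8]                # benign input, classified by sign(w.x)

def predict(v):
    score = sum(wi * vi for wi, vi in zip(w, v))
    return 1 if score >= 0 else -1

def sign(z):
    return 1 if z >= 0 else -1

# FGSM-style step: push each feature against the current label, bounded by
# a small budget eps, until the classifier's decision flips.
eps = 0.6
label = predict(x)
x_adv = [xi - eps * sign(wi) * label for xi, wi in zip(x, w)]

print(predict(x), predict(x_adv))   # 1 -1  (the label flips)
```

The same idea, scaled up to image or text models, underlies many of the attacks the paper catalogues.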
Researchers jailbreak AI chatbots, including ChatGPT: like a magic wand that turns chatbots evil.
In the hours before ChatGPT was taken offline, it was reportedly possible for some users to see another active user's first and last name, email address, payment address, the last four digits of a credit card number, and the card's expiration date. Full credit card numbers were not exposed at any time.
If you don't have enough to worry about already, consider a world where AIs are hackers.
Hacking is as old as humanity. We are creative problem solvers. We exploit loopholes, manipulate systems, and strive for more influence, power, and wealth. To date, hacking has exclusively been a human activity. Not for long.
As I lay out in a report I just published, artificial intelligence will eventually find vulnerabilities in all sorts of social, economic, and political systems, and then exploit them at unprecedented speed, scale, and scope. After hacking humanity, AI systems will then hack other AI systems, and humans will be little more than collateral damage. Learn more / En savoir plus / Mehr erfahren: https://www.scoop.it/t/securite-pc-et-internet/?&tag=AI
There are many forces that are shaping the future of the internet today, from artificial intelligence (AI) and cyberthreats to the internet of things (IoT) and the rising role of government—all of which impact key areas, including digital divides, personal freedoms and rights, as well as media and society.
The Internet Society (ISOC)'s 2017 Global Internet Report found, for example, that AI and IoT, for all their benefits to people's personal and working lives, could result in a “surveillance society.” Ethical considerations should therefore steer technology development and guide its use.
The survey also found wide-ranging fears that there are significant forces at work that may undermine the promise of the internet for future generations. For example, many believe that internet freedom will continue to decline around the world due to widespread surveillance, internet shutdowns and content regulation. At the same time, cybersecurity issues will pressure governments to take decisions that could erode the open and distributed global governance of the internet. Measures that may be intended to secure cyberspace may undermine personal rights and freedoms. Without a change of course, online freedoms may be nearing a point of irreversible decline, ISOC found. Learn more / En savoir plus / Mehr erfahren: http://www.scoop.it/t/securite-pc-et-internet/?&tag=AI http://www.scoop.it/t/securite-pc-et-internet/?&tag=iot
Despite spending more money on security than ever, organisations struggling with a widespread cybersecurity skills gap are often told how technologies like big data, analytics, machine learning, and artificial intelligence can aid them in protecting their data or critical infrastructure from attackers.
Organisations ranging from startups to large established corporations are investing in building AI systems that bolster defences by analysing vast amounts of data, helping cybersecurity professionals identify far more threats than they could manually.
But the same technologies that improve corporate defences could also be used to attack them. Learn more / En savoir plus / Mehr erfahren: http://www.scoop.it/t/securite-pc-et-internet/?tag=Cyberattacks http://www.scoop.it/t/securite-pc-et-internet/?tag=Artificial+Intelligence
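The defensive side described above can be caricatured in a few lines. The daily login counts, single-feature model, and threshold below are invented; real products model far richer behaviour across many signals:

```python
import statistics

# Toy version of the "analyse vast amounts of data" idea: flag activity that
# deviates sharply from a learned baseline. The daily login counts below are
# invented, and real products model many features rather than one.
baseline = [102, 98, 110, 95, 105, 99, 101]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(count, threshold=3.0):
    """Simple z-score test against the baseline."""
    return abs(count - mean) / stdev > threshold

print(is_anomalous(104), is_anomalous(480))   # False True
```

An attacker who knows the model, of course, can try to stay just under the threshold, which is exactly the dual-use concern the article raises.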
Funded by DARPA, the government agency that commissions far-out research for the US Department of Defense, this is the Cyber Grand Challenge.
The computers are competing to be the best at a tedious and challenging task that human cybersecurity researchers do every day: find a bug in a software program, then fix it. Right now there aren't enough skilled people to do that job, so this technology could take pressure off IT departments everywhere struggling to stay on top of vulnerabilities in their computer systems.
The number of vulnerabilities in computer software running in the world is impossible to know. Cybersecurity firm Symantec estimated in its 2016 report on internet security threats that researchers across the industry found more than 5,500 new vulnerabilities in 2015 alone. Those bugs tend to stick around, as programmers copy-paste outdated software into new products, and users like you and me forget to update our software.
This new technology, experts say, will also give cyberdefenders a much needed advantage in a war that right now heavily favors the bad guys. It's much easier to find one bug and exploit it than to defend against every single possible weakness in a computer system. This competition hopes to flip that script.
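The find-a-bug loop the competing machines automate can be sketched as a toy fuzzer. The target function and its planted bug are invented for illustration and have nothing to do with the actual competition entries:

```python
import random

# Hypothetical target: a parser with a planted off-by-one bug. Neither the
# function nor the bug comes from the Cyber Grand Challenge; it is only a
# stand-in for the kind of flaw the competing machines hunt for.
def parse(data):
    buf = bytearray(8)
    for i, b in enumerate(data):
        buf[i] = b                  # IndexError (our "crash") once len(data) > 8
    return len(buf)

def fuzz(target, tries=1000, seed=42):
    """Throw random inputs at `target` and return the first crashing one."""
    rng = random.Random(seed)
    for _ in range(tries):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(16)))
        try:
            target(data)
        except Exception:
            return data             # a crasher, ready for triage and patching
    return None

crasher = fuzz(parse)
print(len(crasher))                 # always more than 8 bytes
```

The competition systems go a step further than this sketch: after finding the crasher, they also synthesise a patch and prove the program still works.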
ChatGPT is the victim of a new security flaw. By exploiting this breach, it is possible to extract sensitive data about individuals simply by talking to OpenAI's chatbot.
Criminals love ChatGPT – and Europol has found concrete examples. What observers predicted has come true: ChatGPT is already being used in criminal schemes, Europol warns – even if only for researching crimes. But the potential danger goes far beyond that.
A ChatGPT bug found earlier this week also revealed users' payment information, says OpenAI.
The AI chatbot was shut down on March 20, due to a bug that exposed titles and the first message of new conversations from active users' chat history to other users.
Now, OpenAI has shared that even more private data from a small number of users was exposed.
"In the hours before we took ChatGPT offline on Monday, it was possible for some users to see another active user’s first and last name, email address, payment address, the last four digits (only) of a credit card number, and credit card expiration date," said OpenAI. "Full credit card numbers were not exposed at any time."
As 2022 ended, OpenAI released ChatGPT to the world: an AI chatbot that interacts through text with realistic, human-like responses. Its deep-learning techniques can generate conversations convincing enough that anyone might believe they are talking to an actual human.
Like releasing the genie from its bottle, its impact is still largely unknown, but it arrived amid intense intrigue and curiosity. How will it be used; how does it work; is it for good or evil? No, this is not the next Terminator sequel…
Its intentions are certainly positive, and its articulate responses have led many to call it the best chatbot yet released. In a short period, however, ChatGPT has already been linked to cyber threats as cyber-criminals leverage its advanced capabilities for nefarious ends. Learn more / En savoir plus / Mehr erfahren: https://www.scoop.it/topic/securite-pc-et-internet/?&tag=ChatGPT
Until now, the startup Clearview AI was all but unknown to the public – and that was entirely deliberate. The small company has sold a facial-recognition technology that apparently works well, but is problematic on many levels, to hundreds of police departments in the US. Now the "New York Times" has made Clearview the embodiment of every fear associated with facial recognition; the paper writes of the potential "end of privacy as we know it."
There is actually nothing revolutionary about the technology: it is assembled from pieces that already exist elsewhere. But in combination they work so well that investigators like using the software and, by their own account, use it successfully.
All they have to do is upload a single picture of a suspect to Clearview – regardless of whether the picture was taken head-on, or whether the person is wearing, say, sunglasses or a hat. The image is converted into a mathematical model of the face, much as Apple's Face ID does in principle. This model is matched against a database – and that database is remarkable: Clearview allegedly holds a collection of three billion photos that it is said to have scraped, without permission, from Facebook, Instagram, YouTube "and millions of other websites."
These photos are likewise converted into mathematical models and, when sufficiently similar to the uploaded image, displayed as possible matches together with links to their sources. That enables rapid identification. Learn more / En savoir plus / Mehr erfahren: http://www.scoop.it/t/securite-pc-et-internet/?&tag=Facial+Recognition
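The pipeline described above (image, mathematical face model, database match) can be sketched as a toy similarity search. The embeddings, photo IDs, and tiny dimensionality below are invented for illustration:

```python
import math

# Sketch of the matching step described above, with tiny made-up 4-dimensional
# "face embeddings" standing in for the mathematical face models; real systems
# use vectors of hundreds of dimensions produced by a neural network.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

database = {                        # hypothetical scraped-photo index
    "site-1/photo-17": [0.9, 0.1, 0.3, 0.2],
    "site-2/photo-04": [0.1, 0.8, 0.1, 0.9],
}
probe = [0.85, 0.15, 0.25, 0.2]     # embedding of the uploaded image

# Rank every stored photo by similarity to the probe; high scorers are shown
# as candidate matches together with links to their sources.
matches = sorted(database.items(),
                 key=lambda kv: cosine_similarity(probe, kv[1]),
                 reverse=True)
print(matches[0][0])                # best candidate: site-1/photo-17
```

The privacy problem is not this matching step, which is standard, but the three billion scraped photos sitting behind it.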
As artificial intelligence becomes more and more sophisticated, Palmer told Business Insider in an interview at the FT Cybersecurity Summit in London in September, it will inevitably find its way into malware — with potentially disastrous results for the businesses and individuals that hackers target.
It's important to remember that Palmer is in the security business: it's his job to hype up present and future threats and to convince customers that Darktrace is the one that can save them. Darktrace is a $500 million (£401 million) British firm with an AI-driven approach to defending networks: it creates an "immune system" for customers that learns how a business operates, then monitors for potential irregularities.
Malware will learn to mimic people you know
Using recurrent neural networks, it's already possible to teach AI software to mimic writing styles — whether that's clickbait viral news articles or editorial columns from The Guardian. Palmer suggests that in the future, malware will be able to look through your correspondence, learn how you communicate, and then mimic you in order to infect other targets. Learn more / En savoir plus / Mehr erfahren: http://www.scoop.it/t/securite-pc-et-internet/?tag=AI
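As a much weaker stand-in for the recurrent networks Palmer describes, even a word-level Markov chain picks up surface style from a small sample; the scrap of "correspondence" below is invented:

```python
import random
from collections import defaultdict

# A much weaker stand-in for the recurrent networks described above: a
# word-level Markov chain trained on an invented scrap of correspondence.
sample = ("cheers mate see you at the pub later "
          "cheers mate the report is attached see you soon")

chain = defaultdict(list)
words = sample.split()
for a, b in zip(words, words[1:]):
    chain[a].append(b)              # record which word follows which

def mimic(start, length, seed=1):
    """Emit text that loosely echoes the sample's phrasing."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < length and chain.get(out[-1]):
        out.append(rng.choice(chain[out[-1]]))
    return " ".join(out)

print(mimic("cheers", 6))           # text echoing the sample's phrasing
```

A neural model trained on a real mailbox would capture far more than word pairs, which is exactly what makes the impersonation scenario plausible.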
Learn more / En savoir plus / Mehr erfahren:
https://www.scoop.it/topic/21st-century-innovative-technologies-and-developments/?&tag=ChatGPT
https://www.scoop.it/t/21st-century-innovative-technologies-and-developments/?&tag=AI
https://www.scoop.it/topic/21st-century-innovative-technologies-and-developments/?&tag=Ethics