Ramesh has seven years of experience writing and editing stories on finance, enterprise and consumer technology, and diversity and inclusion. She has previously worked at formerly News Corp-owned TechCircle, business daily The Economic Times and The New Indian Express.
This week: a guilty plea over $37M stolen, a $3.8M Onyx hack, a first conviction for illegal crypto ATM operations, fraud by the owner of Zort, WazirX's post-hack liability, U.S. congressmen asking for a Binance executive's release, a U.S. court denying a Tornado Cash executive's motion, and an SEC-Mango Markets settlement.
OpenAI claims its new artificial intelligence model, designed to "think" and "reason," can solve linguistic and logical problems that stump existing models. Officially called o1 and nicknamed Strawberry, the model can also deceive users and help make weapons capable of obliterating the human race.
California Gov. Gavin Newsom on Sunday vetoed a hotly debated AI safety bill that would have pushed developers to implement measures to prevent "critical harms." The bill "falls short of providing a flexible, comprehensive solution to curbing the potential catastrophic risks," Newsom said.
The United States on Thursday criminally charged an alleged key money laundering figure in the Russian cybercriminal underground on the same day Western authorities shut down virtual currency exchanges by seizing web domains and servers associated with Russian cybercrime.
This week: BingX, Truflation and OpenAI's X account were hacked; Germany shut 47 exchanges; Caroline Ellison was sentenced; two people were charged with crypto theft; one was fined over a crypto scam; Banana Gun will refund victims; WazirX and Liminal are in a dispute; the SEC settled with TrueCoin and TrustToken; and the CFTC may settle with Mango Markets.
Wednesday brought more turmoil in the top ranks of OpenAI after three executives in leadership positions quit the company at a time when the AI giant seeks to convert itself into a for-profit entity. The new structure may affect how the company prioritizes and addresses AI risks.
More than 100 tech companies including OpenAI, Microsoft and Amazon on Wednesday made voluntary commitments to conduct trustworthy and safe development of artificial intelligence in the European Union, with a few notable exceptions, including Meta, Apple, Nvidia and Mistral.
Using facts to train artificial intelligence models is getting tougher, as companies run out of real-world data. AI-generated synthetic data is touted as a viable replacement, but experts say this may exacerbate hallucinations, which are already a big pain point of machine learning models.
LinkedIn this week joined its peers in using social media posts as training data for AI models, raising trustworthiness and safety concerns. The question for AI developers is not whether companies use the data, or even whether it is fair to do so; it is whether the data is reliable.
This week, Delta Prime and Ethena were hacked, Lazarus' funds were frozen, the SEC settled with Prager Metis and Rari Capital, Sam Bankman-Fried sought a new trial, the SEC accused NanoBit and CoinW6 of scams, the CFTC sought to fight pig butchering, and Wormhole integrated World ID and Solana.
If the bubble isn't popping already, it'll pop soon, say many investors and close observers of the AI industry. If past bubbles are a benchmark, the burst will filter out companies with no solid business models and pave the way for more sustainable growth for the industry in the long term.
California enacted regulation to crack down on the misuse of artificial intelligence as Gov. Gavin Newsom on Tuesday signed five bills focused on curbing the impact of deepfakes. The Golden State has been on the national forefront of tech regulation.
This week, Indodax was hacked, Angel Drainer resurfaced, Russia developed Infra crypto, GS Partners settled with U.S. states, Caroline Ellison to be sentenced Sept. 24, FCA prosecuted first unregistered crypto case, Nigeria set new crypto regulations, and India may approve offshore crypto firms.
The U.S. federal government is preparing to collect reports from foundational artificial intelligence model developers, including details about their cybersecurity defenses and red-teaming efforts. The Department of Commerce said it wants input on how the data should be safely collected and stored.
The underground market for illicit large language models is a lucrative one, said academic researchers who called for better safeguards against artificial intelligence misuse, warning that "this laissez-faire approach essentially provides a fertile ground for miscreants to misuse the LLMs."