OpenAI has banned several accounts originating from China that were misusing its ChatGPT technology to develop an AI-powered social media surveillance tool. These accounts used ChatGPT to write sales pitches and debug code for a program designed to monitor anti-China sentiment on platforms such as X (formerly Twitter), Facebook, YouTube, and Instagram. The tool's purpose was to identify calls for protests against human rights violations in China, with the intent of sharing those insights with Chinese authorities. Additionally, the group used ChatGPT to generate phishing emails targeting clients in China.
This action is part of OpenAI's broader effort to prevent misuse of its AI models for malicious activities, including surveillance and influence operations. In a related case, OpenAI banned accounts associated with North Korea that generated fake resumes and online profiles to fraudulently obtain employment at Western companies. Another case involved a financial fraud operation in Cambodia that used ChatGPT to translate and generate comments on social media platforms.
The US government has expressed concern about the potential use of AI technologies by authoritarian regimes to suppress dissent and spread disinformation. OpenAI's proactive measures illustrate the challenges AI companies face in identifying and blocking accounts engaged in such activities, so that their technologies are not exploited for harmful purposes. As AI tools become increasingly accessible, monitoring and preventing their misuse remains a key priority for developers and policymakers.
In a similar vein, the Chinese AI startup DeepSeek has faced regulatory action in several countries over data privacy and security concerns. South Korea's Personal Information Protection Commission suspended new downloads of DeepSeek's AI apps, citing non-compliance with personal data protection rules. The suspension will remain in place until the apps meet the required privacy standards, although the web service remains accessible. Additionally, Italy's data protection authority blocked DeepSeek's chatbot over privacy issues, reflecting growing global scrutiny of AI applications that may compromise user data.
These cases highlight the global challenge of regulating AI technologies: balancing innovation with the obligation to protect users and prevent abuse. As AI continues to evolve, companies and governments worldwide are grappling with how to establish ethical standards that safeguard human rights while promoting technological progress.
Thanks for reading.