ChatGPT Search no longer requires an account
This week was a big one for OpenAI: the company has made ChatGPT Search available to all users. The service was previously gated behind a requirement, an OpenAI account, but you no longer need to sign up for an account to use the AI-powered search engine. Users who want to search the web with ChatGPT can go to chatgpt.com, type their query into the chat box, and click the search button that appears below it. ChatGPT Search then presents the results the way its chatbot answers questions: each result is displayed as a rich snippet with a brief text summary and an image from the link it was drawn from. A key difference between ChatGPT and ChatGPT Search is that the latter pulls data from the Internet in real time, as opposed to relying on a fixed training data set.
Although this is a good way to test the engine's capabilities, it has some limitations, such as not being able to view your search history. The AI-driven search engine appears set to compete with Google Search, Bing, DuckDuckGo, and the likes of Brave Search.
ChatGPT on WhatsApp now supports voice messages and images. Users can sign in to their account to access the premium features available to ChatGPT Plus and Pro subscribers.
OpenAI launches Deep Research for ChatGPT
OpenAI has unveiled Deep Research for ChatGPT users. It is a state-of-the-art tool capable of conducting complex, in-depth research across a variety of subjects. The AI uses the latest o3 model, and has been trained using a multi-layered neural network to analyze patterns, reason through decisions, and perform human-like tasks.
Users have to select "Deep Research" in the ChatGPT composer to initiate an in-depth search. However, it does not respond to a query immediately the way a regular chat does; Deep Research is quite slow, and it may take up to 30 minutes to respond. Deep Research is not currently free: it is available to ChatGPT Pro subscribers, with a limit of 100 queries per month. OpenAI has admitted that the feature is not flawless; it can make mistakes or misinterpret sources, and its report formatting is not always correct. The company is working on improving the feature by adding support for data visualization, embedded images, and more authoritative data sources.
Google's Gemini 2.0 is here
Google has released Gemini 2.0 Flash to all users. Gemini is now more efficient than ever and also delivers better performance. The new AI model is available on mobile devices and through the Gemini app on the web.
Gemini 2.0 Pro Experimental is now available to advanced users in the Gemini app, Google AI Studio, and Vertex AI. The experimental model can handle complex prompts and coding tasks, thanks to its 2 million-token context window. It is capable of ingesting large data sets, and of deeper understanding and reasoning. Google has also launched Gemini 2.0 Flash-Lite, its most cost-efficient AI model. It has a 1 million-token context window, supports multimodal input, and can handle tasks such as generating captions for thousands of images. Google is working to add more multimodal functions to Gemini, such as image generation and text-to-speech capabilities.
Gemini has had its fair share of problems, too, as Google has acknowledged that hackers are using the AI-powered tool for cybercrime activities such as researching vulnerabilities, drafting phishing campaigns, and gathering intelligence on defense organizations.
Google Search soon to get AI-powered assistant features
Google Search will soon get AI-powered features. The first of the planned features includes an option to ask a follow-up question, allowing users to interact with the service much as they would with an AI assistant. Google's upcoming features also include Project Mariner, an AI agent that can navigate a browser, click buttons, and fill in forms on its own.
OpenAI is not the only one with an in-depth research tool: Google has its own Gemini Deep Research, designed to help users with their academic or professional needs. It includes source links and key findings, and can export material to Google Docs.
YouTube is testing 4x playback speed for some reason
In what seems like a bizarre move, YouTube is testing an option to play videos at 4x playback speed. Until now, YouTube has allowed users to play videos at a maximum of 2x normal speed. Google is also testing a feature called "Jump ahead" to help viewers get to their favorite content quickly. YouTube has added an audio quality upgrade with support for a 256kbps bitrate in music videos. iOS users can now watch YouTube Shorts in picture-in-picture mode while multitasking with other apps. Google has also enabled Smart Downloads to automate video downloads for Shorts, allowing users to watch videos when they are offline.
YouTube's 4x playback speed is currently available to Premium subscribers. It is not clear why the feature was designed, and only time will tell whether it becomes popular among users. Experimental features like the ones described above are tested temporarily before being either removed or released to all users.
SparkCat mobile malware used infected apps and OCR to steal data
Security researchers at Kaspersky have discovered that malicious apps on the Google Play Store, Apple App Store, and third-party channels are using a new method to steal data. The new malware strain, called SparkCat, has been active since April 2024; it was written in the Rust programming language and distributed as a malicious SDK. When a user installed an infected app on their phone, it would quietly install an OCR (optical character recognition) plugin on the device. The malware would use it to scan the images on the user's phone to recover cryptocurrency wallet recovery phrases. The malware was designed to target Android and iPhone users in Europe and Asia, and the infected apps were operating in many countries.
Google blocked 2.3 million risky apps in 2024. Apple has confirmed that it pulled 11 apps from the iOS App Store, including ComeCome, WeTink, and AnyGPT, to protect iPhone users from SparkCat malware attacks.