The <a href="https://www.thenationalnews.com/business/technology/2023/03/13/chatgpt-4-release-date-when-is-the-new-ai-program-out/" target="_blank">highly anticipated</a> update to ChatGPT was released on Tuesday by OpenAI, with immediate access available on Microsoft's search engine Bing Chat and a waiting list opened to developers.

GPT-4 is “less capable than humans in many real-world scenarios”, OpenAI said <a href="https://openai.com/research/gpt-4" target="_blank">in a blog post</a>, but it represents “the latest milestone in OpenAI’s effort in scaling up deep learning”. Still, by the company's own admission, “GPT-4 poses similar risks as previous models, such as generating harmful advice, buggy code or inaccurate information”.

Microsoft has <a href="https://www.thenationalnews.com/business/technology/2023/01/10/microsoft-in-talks-to-invest-10-billion-in-chatgpt-owner-openai-reports-say/" target="_blank">poured more than $11 billion into OpenAI</a> under its partnership with the company as it races to integrate AI into its products and services.

On the same day GPT-4 was announced, technology newsletter Platformer reported that Microsoft had laid off its entire ethics and society team as part of a sweeping round of redundancies that affected 10,000 employees across the company. According to Platformer, this means Microsoft no longer has a dedicated team to ensure that its AI principles, such as avoiding bias and protecting user safety, are built into the products it develops.

Microsoft still has an Office of Responsible AI, which sets the rules for the company’s AI efforts. Platformer reported that the company says “its overall investment in responsibility work is increasing despite the recent layoffs”.

Since its launch in November, ChatGPT's popularity has surged, with traffic to the site hitting more than one billion visits, up from 616 million in January, according to Similarweb estimates.

The updated version is a large multimodal model, meaning it can accept both image and text inputs from users and generate text responses, an improvement on its predecessor, GPT-3.5, which accepted only text. OpenAI said the new version exhibits “human-level performance on various professional and academic benchmarks”. For example, GPT-4 scores in the 90th percentile on a practice bar exam, the qualifying test for lawyers; by comparison, GPT-3.5’s score was in the bottom 10 per cent.

GPT-4 has already been in use in one application for more than a month: Microsoft said Bing Chat, the chatbot it co-developed with OpenAI, had been running on GPT-4 for the past five weeks.

The company will host an event on March 16 to discuss “reinventing productivity with AI”. Microsoft has so far announced AI updates for its Windows operating system and its Bing search engine, but not yet for its Office productivity suite, which includes Word and Excel.

Elsewhere, GPT-4 is available now to OpenAI’s paying ChatGPT Plus subscribers, while developers can join a waiting list for access to the platform. But there is a usage cap, and only text prompts are available; the company said image inputs are still being tested.

OpenAI is also inviting researchers “studying the societal impact of AI or AI alignment issues” to apply for subsidised access through its <a href="https://share.hsforms.com/1b-BEAq_qQpKcfFGKwwuhxA4sk30">Researcher Access Programme</a>.

Other companies are already using GPT-4, as OpenAI highlighted in its announcement. Payments platform Stripe is using GPT-4 to scan business websites and deliver a summary to customer support staff.
Language-learning app Duolingo has built GPT-4 into a new premium subscription tier that supports role-play conversations and lets users chat with a bot to better understand why an answer was right or wrong. Also in education, online learning company Khan Academy is using GPT-4 to build an automated tutor. US bank Morgan Stanley is developing a GPT-4-powered tool that can retrieve specific information from company documents to assist financial analysts.

“Despite its capabilities, GPT-4 has similar limitations as earlier GPT models,” OpenAI said. The company says it “still is not fully reliable” and is prone to stating incorrect facts, making reasoning errors and displaying bias. OpenAI cautioned against using it in “high-stakes contexts” and advised human review, the provision of additional context or “avoiding … uses altogether”.

While acknowledging these issues, OpenAI said GPT-4 makes fewer errors than previous models, scoring 40 per cent higher than GPT-3.5 on the company’s internal factuality evaluations.