Sam Altman, co-founder and CEO of OpenAI, speaks during Italian Tech Week 2024 at the OGR Officine Grandi Riparazioni in Turin, Italy, on September 25, 2024.

OpenAI has officially released its most recent artificial intelligence model, named o3. The model is described as being capable of "thinking with images," meaning it can analyze and comprehend a user's sketches and diagrams even when those images are low-resolution or unclear. This marks a significant step forward for AI, where understanding visual input has often been a challenging task.

The o3 model is accompanied by a smaller version known as o4-mini. This announcement comes on the heels of the earlier release of OpenAI's first reasoning model, referred to as o1, which debuted in September and was primarily designed to tackle complex problem-solving scenarios using a multi-step deliberative process.

With o3, users can upload a variety of visual content, including whiteboard photos and sketches, and the AI can analyze the images in detail and discuss them. The models can also rotate and zoom in on images and apply other image-editing tools as part of their reasoning, enhancing the interactivity and usability of the platform.
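For illustration only, here is a minimal sketch, not taken from the article, of how a developer might send a whiteboard photo to such a model through OpenAI's Python SDK. The model identifier "o3", the file name, and the prompt are assumptions; the request format shown is the standard Chat Completions image-input format.

```python
# Hedged sketch (assumptions noted above): asking a vision-capable model
# to analyze a local whiteboard photo via OpenAI's Python SDK.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Encode the sketch as a base64 data URL so it can be embedded in the request.
with open("whiteboard_sketch.png", "rb") as f:  # hypothetical file name
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="o3",  # assumed model identifier
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Explain the diagram drawn on this whiteboard."},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/png;base64,{image_b64}"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```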

Since the introduction of the highly popular ChatGPT chatbot in late 2022, OpenAI has been on a fast track to upgrade its capabilities, extending beyond mere text processing to encompass images, voice recognition, and video analysis. The company is in a relentless race to maintain its competitive edge in the generative AI landscape, where it faces stiff competition from major players such as Google, Anthropic, and Elon Musk's xAI.

OpenAI emphasized that for the first time, its reasoning models are capable of independently utilizing all the tools available in ChatGPT, which include web browsing, Python programming, image comprehension, and image generation. This multi-faceted approach significantly bolsters their ability to solve intricate, multi-step problems more efficiently and marks a stride towards more autonomous AI operations.
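The article describes this tool use inside ChatGPT rather than the developer API, but as a hedged sketch of what agent-style tool use could look like programmatically, the following assumes OpenAI's Responses API and its built-in web-search tool; the model choice and prompt are illustrative only.

```python
# Hedged sketch (assumptions noted above): letting a reasoning model decide
# on its own whether to invoke web search while answering a multi-step question.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="o4-mini",  # illustrative choice; the article says o4-mini targets speed and cost
    tools=[{"type": "web_search_preview"}],  # built-in web-search tool
    input=(
        "Find the most recent pricing announced for this model family and "
        "summarize how it compares with the previous generation."
    ),
)

# The model chooses when (or whether) to call the tool during its reasoning.
print(response.output_text)
```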

The recent models o3 and o4-mini are particularly noteworthy as they represent OpenAI's first foray into AI models that can "think with images." According to OpenAI, this means the models do not simply recognize an image; they integrate visual information directly into their reasoning processes, thereby enhancing their analytical capabilities.

Last month, OpenAI also rolled out a native image-generation feature that quickly gained popularity online for producing Studio Ghibli-style anime images, showcasing the creative potential of AI in the visual arts.

OpenAI has tailored the o3 model specifically for tasks related to mathematics, coding, and scientific analysis, while the o4-mini version is optimized for speed and cost efficiency. Both models became available starting Wednesday to users subscribed to ChatGPT Plus, Pro, and Team memberships.

The AI community has often poked fun at the peculiar naming conventions used for OpenAI's models. In a lighthearted twist, CEO Sam Altman participated in this ongoing banter, suggesting via a post on X that the company should consider renaming its models by summer, humorously acknowledging the jest and criticism they have received.

Furthermore, OpenAI said both models were stress-tested under the company's most stringent safety protocols to date. The testing aligns with its updated Preparedness Framework, released earlier this week, which is designed to ensure the safety and reliability of its AI systems.

However, OpenAI has recently faced scrutiny regarding its safety measures and policies. In its latest statement, the company indicated that it reserves the right to adjust its safety requirements should another frontier AI developer release a high-risk system without comparable safeguards. This cautionary approach reflects the ongoing challenges surrounding AI safety and regulatory compliance.

In a notable policy shift, OpenAI announced that it would no longer mandate safety tests for certain fine-tuned models. Additionally, the company opted not to publish a model card (a report that typically details the safety assessments conducted prior to a model's release) for its GPT-4.1 model. Earlier this year, OpenAI launched its AI agent tool, Deep Research, weeks before making its system card public.

As of now, OpenAI has not responded to requests for further comments on these developments.

WATCH: OpenAI is contemplating the creation of a social network.