OpenAI Launches Developer APIs: Responses, Web Search, and Computer Use

Sam Witteveen breaks down OpenAI's newly announced developer APIs, which fill several gaps in the company's lineup. The centerpiece is the Responses API, a single endpoint that bundles a wide range of tools and settings, from image handling to web search, and gives developers a simpler way to work with OpenAI's latest models. The Completions and Chat Completions APIs remain in place, but the Assistants API is slated for retirement in mid-2026 as OpenAI consolidates around the more popular options.
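The snippet below is a minimal sketch of a Responses API call using the official `openai` Python SDK; the model name `gpt-4o` and the prompt are placeholders, so check the current documentation for exact parameters.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The Responses API takes a single `input` field instead of a chat
# `messages` list; tools and settings attach to the same call.
response = client.responses.create(
    model="gpt-4o",  # placeholder model name
    input="Explain in one sentence what the Responses API offers developers.",
)

# Convenience property that concatenates the text parts of the output.
print(response.output_text)
```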
The Responses API supports text, images, web search, file search, function calling, and reasoning models. The web search tool is a standout addition: it lets a model query the live web directly from OpenAI's platform and return natural-language answers with direct links to the source articles. Pricing starts at $30 per 1,000 calls, with rates varying by context size, which makes it an attractive option for developers who want grounded, up-to-date results in their projects.
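As a rough sketch of enabling the web search tool in the same SDK: the tool type `web_search_preview` and the `search_context_size` option reflect the launch-time documentation, so treat them as assumptions that may change.

```python
from openai import OpenAI

client = OpenAI()

# Enabling the built-in web search tool; the model decides when to search
# and returns an answer with links to the pages it used.
response = client.responses.create(
    model="gpt-4o",  # placeholder model name
    tools=[{
        "type": "web_search_preview",
        "search_context_size": "medium",  # low/medium/high tiers affect pricing
    }],
    input="What new developer APIs did OpenAI announce recently?",
)

print(response.output_text)
```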
The file search tool improves how models work with uploaded files, adding metadata filtering and citations to the results. OpenAI also exposes Computer Use, the capability behind its Operator agent, which takes a task description and completes it by operating a browser over an internet connection. The consumer-facing Operator is currently limited to the ChatGPT Pro plan in the United States, but the direction is clear: OpenAI wants these capabilities available through accessible APIs so developers can build their own agents on top of them.
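A hedged sketch of the file search tool follows; the vector store ID is a placeholder, and in practice you would first create a vector store and upload your files to it before referencing it here.

```python
from openai import OpenAI

client = OpenAI()

# "vs_example123" is a placeholder ID; create a vector store and upload
# files to it first, then pass the real ID here.
response = client.responses.create(
    model="gpt-4o",  # placeholder model name
    tools=[{
        "type": "file_search",
        "vector_store_ids": ["vs_example123"],
    }],
    input="What does the uploaded report say about termination terms?",
)

# The answer includes citations back to the matching file chunks.
print(response.output_text)
```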

Watch OpenAI - NEW API & Agent Tools Breakdown on YouTube
Viewer Reactions for OpenAI - NEW API & Agent Tools Breakdown
Ways to disable tracing with one line of code and custom tracing options are available
New API has interesting features
Preference for using Gemini and Claude in apps over OpenAI
Excitement to use AI agent with OpenAI
Curiosity about Agents SDK and comparison with other frameworks like LangGraph
Operator is based on o1
Concerns about OpenAI using data from API calls
Comparison of OpenAI with Google and Anthropic
Lack of trust in Sam Altman and skepticism about the new API being a vendor lock-in
Question about having to upload all files to OpenAI servers to use File Search API
Related Articles

Revolutionizing Instruction Following: OpenAI's Image Generation Model Unleashed
Discover how OpenAI's latest image generation model revolutionizes instruction following, sparking creativity with Studio Ghibli-style images and mind maps. Explore its advanced capabilities and potential for innovative applications.

Unveiling Qwen 2.5 Omni: Revolutionizing AI with Multimodal Capabilities
Explore the cutting-edge Qwen 2.5 Omni model, an open-source multimodal AI marvel allowing text, audio, video, and image inputs with precise outputs. Witness its innovative architecture, unique features, and seamless performance in revolutionizing the AI landscape.

Introducing Gemini 2.5 Pro: Enhanced Thinking & Coding Capabilities
Discover the latest Gemini 2.5 Pro model in Sam Witteveen's breakdown, showcasing enhanced thinking capabilities and improved performance. Explore its coding prowess and structured reasoning process in this innovative release.

Nvidia GTC 2025: Unveiling Llama Nemotron Super 49B v1 and Model Advancements
Nvidia unveils reasoning models at GTC 2025, including Llama Nemotron Super 49B v1. Explore the post-training dataset and API access for model testing. Compare the 49B and 8B models' performance and discuss local versus cloud model usage. Exciting developments in reasoning model technology.