{"id":825556,"date":"2025-02-09T07:11:55","date_gmt":"2025-02-09T13:11:55","guid":{"rendered":"https:\/\/newsycanuse.com\/index.php\/2025\/02\/09\/openais-deep-research-bytedances-video-ai-and-metas-ai-framework-this-weeks-ai-launches\/"},"modified":"2025-02-09T07:11:55","modified_gmt":"2025-02-09T13:11:55","slug":"openais-deep-research-bytedances-video-ai-and-metas-ai-framework-this-weeks-ai-launches","status":"publish","type":"post","link":"https:\/\/newsycanuse.com\/index.php\/2025\/02\/09\/openais-deep-research-bytedances-video-ai-and-metas-ai-framework-this-weeks-ai-launches\/","title":{"rendered":"OpenAI&#8217;s deep research, ByteDance&#8217;s video AI, and Meta&#8217;s AI framework: This week&#8217;s AI launches"},"content":{"rendered":"<div>\n<div>\n<figure id data-id=\"add5df8778bd2ac127e24c206bda9a99\" data-recommend-id=\"image:\/\/add5df8778bd2ac127e24c206bda9a99\" data-format=\"png\" data-width=\"3840\" data-height=\"2160\" data-lightbox=\"false\" data-alt=\"a graphic showing the deep research option on ChatGPT\" data-recommended=\"false\" data-hide=\"false\" contenteditable=\"false\" draggable=\"false\">\n<div contenteditable=\"false\" data-alt=\"a graphic showing the deep research option on ChatGPT\" data-link-reference data-link-target data-syndicationrights=\"false\" data-imagerights=\"other-license\" data-hide=\"false\" data-hidecredit=\"false\">\n<div><picture><source media=\"(max-width: 49.94em)\" type=\"image\/jpeg\" ><source media=\"(min-width: 50em) and (max-width: 63.69em)\" type=\"image\/jpeg\" ><source media=\"(min-width: 63.75em)\" type=\"image\/jpeg\" ><img decoding=\"async\" alt=\"a graphic showing the deep research option on ChatGPT\" data-chomp-id=\"add5df8778bd2ac127e24c206bda9a99\" data-format=\"png\" data-height=\"2160\" data-alt=\"a graphic showing the deep research option on ChatGPT\" data-anim-src 
src=\"https:\/\/i.kinja-img.com\/image\/upload\/c_fit,q_60,w_645\/add5df8778bd2ac127e24c206bda9a99.jpg\"><\/picture><\/div>\n<p><figcaption>OpenAI\u2019s deep research in ChatGPT.<\/figcaption><figcaption>Image: OpenAI<\/figcaption><\/p>\n<\/div>\n<p><span data-id=\"add5df8778bd2ac127e24c206bda9a99\" data-recommend-id=\"image:\/\/add5df8778bd2ac127e24c206bda9a99\" data-format=\"png\" data-width=\"3840\" data-height=\"2160\" data-lightbox=\"false\" data-alt=\"a graphic showing the deep research option on ChatGPT\" data-recommended=\"false\" data-hide=\"false\"><\/span><\/figure>\n<div>\n<p>Each week, Quartz rounds up product launches, updates, and funding news from artificial intelligence-focused startups and companies.<\/p>\n<p>Here\u2019s what\u2019s going on this week in the ever-evolving AI industry.<\/p>\n<\/div>\n<\/div>\n<div>\n<figure id data-id=\"9464e277ff80700daa3d7c0bf21439c3\" data-recommend-id=\"image:\/\/9464e277ff80700daa3d7c0bf21439c3\" data-format=\"png\" data-width=\"3841\" data-height=\"2160\" data-lightbox=\"false\" data-alt=\"image of deep research sidebar and inquiry in ChatGPT\" data-recommended=\"false\" data-hide=\"false\" contenteditable=\"false\" draggable=\"false\">\n<div contenteditable=\"false\" data-alt=\"image of deep research sidebar and inquiry in ChatGPT\" data-link-reference data-link-target data-syndicationrights=\"false\" data-imagerights=\"other-license\" data-hide=\"false\" data-hidecredit=\"false\">\n<div><picture><source media=\"(max-width: 49.94em)\" type=\"image\/jpeg\" ><source media=\"(min-width: 50em) and (max-width: 63.69em)\" type=\"image\/jpeg\" ><source media=\"(min-width: 63.75em)\" type=\"image\/jpeg\" ><img decoding=\"async\" alt=\"image of deep research sidebar and inquiry in ChatGPT\" data-chomp-id=\"9464e277ff80700daa3d7c0bf21439c3\" data-format=\"png\" data-height=\"2160\" data-alt=\"image of deep research sidebar and inquiry in ChatGPT\" data-anim-src 
src=\"https:\/\/i.kinja-img.com\/image\/upload\/c_fit,q_60,w_645\/9464e277ff80700daa3d7c0bf21439c3.jpg\"><\/picture><\/div>\n<p><figcaption>OpenAI\u2019s deep research in ChatGPT.<\/figcaption><figcaption>Image: OpenAI<\/figcaption><\/p>\n<\/div>\n<p><span data-id=\"9464e277ff80700daa3d7c0bf21439c3\" data-recommend-id=\"image:\/\/9464e277ff80700daa3d7c0bf21439c3\" data-format=\"png\" data-width=\"3841\" data-height=\"2160\" data-lightbox=\"false\" data-alt=\"image of deep research sidebar and inquiry in ChatGPT\" data-recommended=\"false\" data-hide=\"false\"><\/span><\/figure>\n<div>\n<p>OpenAI this week launched its <span><a data-ga=\"[[\"Embedded Url\",\"External link\",\"https:\/\/openai.com\/index\/introducing-deep-research\/\",{\"metric25\":1}]]\" href=\"https:\/\/openai.com\/index\/introducing-deep-research\/\" target=\"_blank\" rel=\"noopener noreferrer\">Deep Research<\/a><\/span> AI agent, which can use its reasoning capabilities to \u201csynthesize large amounts of online information\u201d and complete multi-step research tasks. Deep Research is powered by a version of OpenAI\u2019s upcoming o3 reasoning model. 
<\/p>\n<p>\u201cIt accomplishes in tens of minutes what would take a human many hours,\u201d OpenAI said about the agent, which is available through ChatGPT.<\/p>\n<p>After a user gives the agent a prompt, it works independently to find, analyze, and synthesize information on the internet to generate \u201ca comprehensive report at the level of a research analyst.\u201d Deep Research can analyze text, images, and PDFs.<\/p>\n<\/div>\n<\/div>\n<div>\n<figure id data-id=\"96545fb5235ebca45503df3a62b0c64a\" data-recommend-id=\"image:\/\/96545fb5235ebca45503df3a62b0c64a\" data-format=\"jpg\" data-width=\"5472\" data-height=\"3648\" data-lightbox=\"false\" data-alt=\"exterior of ByteDance office, ByteDance logo and signage on a grey block wall\" data-recommended=\"false\" data-hide=\"false\" contenteditable=\"false\" draggable=\"false\">\n<div contenteditable=\"false\" data-alt=\"exterior of ByteDance office, ByteDance logo and signage on a grey block wall\" data-link-reference data-link-target data-syndicationrights=\"true\" data-imagerights=\"getty\" data-hide=\"false\" data-hidecredit=\"false\">\n<div><picture><source media=\"(max-width: 49.94em)\" type=\"image\/jpeg\" ><source media=\"(min-width: 50em) and (max-width: 63.69em)\" type=\"image\/jpeg\" ><source media=\"(min-width: 63.75em)\" type=\"image\/jpeg\" ><img decoding=\"async\" alt=\"exterior of ByteDance office, ByteDance logo and signage on a grey block wall\" data-chomp-id=\"96545fb5235ebca45503df3a62b0c64a\" data-format=\"jpg\" data-height=\"3648\" data-alt=\"exterior of ByteDance office, ByteDance logo and signage on a grey block wall\" data-anim-src src=\"https:\/\/i.kinja-img.com\/image\/upload\/c_fit,q_60,w_645\/96545fb5235ebca45503df3a62b0c64a.jpg\"><\/picture><\/div>\n<p><figcaption>ByteDance office in Beijing, China on August 4, 2020.<\/figcaption><figcaption>Photo: Emmanuel Wong (Getty Images)<\/figcaption><\/p>\n<\/div>\n<p><span data-id=\"96545fb5235ebca45503df3a62b0c64a\" 
data-recommend-id=\"image:\/\/96545fb5235ebca45503df3a62b0c64a\" data-format=\"jpg\" data-width=\"5472\" data-height=\"3648\" data-lightbox=\"false\" data-alt=\"exterior of ByteDance office, ByteDance logo and signage on a grey block wall\" data-recommended=\"false\" data-hide=\"false\"><\/span><\/figure>\n<div>\n<p>ByteDance unveiled an AI video generator this week called <span><a data-ga=\"[[\"Embedded Url\",\"External link\",\"https:\/\/omnihuman-1.com\/\",{\"metric25\":1}]]\" href=\"https:\/\/omnihuman-1.com\/\" target=\"_blank\" rel=\"noopener noreferrer\">OmniHuman-1<\/a><\/span>, which can generate realistic videos of humans from a single image and a motion signal, such as audio or video. OmniHuman-1 is a multimodal model, meaning it combines multiple types of input, such as an image and an audio track, to generate its videos.<\/p>\n<p>\u201cWhether it\u2019s a portrait, half-body shot, or full-body image, OmniHuman handles it all with lifelike movements, natural gestures, and stunning attention to detail,\u201d ByteDance said.<\/p>\n<p>OmniHuman-1 is still in the research phase and not yet available to the public.<\/p>\n<\/div>\n<\/div>\n<div>\n<figure id data-id=\"873b1c4d1de58f952205b75eecaa3e80\" data-recommend-id=\"image:\/\/873b1c4d1de58f952205b75eecaa3e80\" data-format=\"jpg\" data-width=\"5000\" data-height=\"3333\" data-lightbox=\"false\" data-alt=\"close-up of lyft logo on a white pill-shaped background on a car window\" data-recommended=\"false\" data-hide=\"false\" contenteditable=\"false\" draggable=\"false\">\n<div contenteditable=\"false\" data-alt=\"close-up of lyft logo on a white pill-shaped background on a car window\" data-link-reference data-link-target data-syndicationrights=\"true\" data-imagerights=\"getty\" data-hide=\"false\" data-hidecredit=\"false\">\n<div><picture><source media=\"(max-width: 49.94em)\" type=\"image\/jpeg\" ><source media=\"(min-width: 50em) and (max-width: 63.69em)\" type=\"image\/jpeg\" ><source media=\"(min-width: 63.75em)\" type=\"image\/jpeg\" ><img 
decoding=\"async\" alt=\"close-up of lyft logo on a white pill-shaped background on a car window\" data-chomp-id=\"873b1c4d1de58f952205b75eecaa3e80\" data-format=\"jpg\" data-height=\"3333\" data-alt=\"close-up of lyft logo on a white pill-shaped background on a car window\" data-anim-src src=\"https:\/\/i.kinja-img.com\/image\/upload\/c_fit,q_60,w_645\/873b1c4d1de58f952205b75eecaa3e80.jpg\"><\/picture><\/div>\n<p><figcaption>Lyft logo on a vehicle in Daly City, California on November 3, 2017.<\/figcaption><figcaption>Photo: Smith Collection\/Gado (Getty Images)<\/figcaption><\/p>\n<\/div>\n<p><span data-id=\"873b1c4d1de58f952205b75eecaa3e80\" data-recommend-id=\"image:\/\/873b1c4d1de58f952205b75eecaa3e80\" data-format=\"jpg\" data-width=\"5000\" data-height=\"3333\" data-lightbox=\"false\" data-alt=\"close-up of lyft logo on a white pill-shaped background on a car window\" data-recommended=\"false\" data-hide=\"false\"><\/span><\/figure>\n<div>\n<p>Rideshare service Lyft (<span><a data-ga=\"[[\"Embedded Url\",\"External link\",\"https:\/\/qz.com\/quote\/LYFT\",{\"metric25\":1}]]\" href=\"https:\/\/qz.com\/quote\/LYFT\" target=\"_blank\" rel=\"noopener noreferrer\">LYFT<\/a><\/span>) and AI startup Anthropic announced a partnership this week to create AI-powered products for Lyft customers.<\/p>\n<p>\u201cThis is to enhance the rideshare experience for its community of more than 40 million annual riders and over 1 million drivers,\u201d Anthropic said in a <span><a data-ga=\"[[\"Embedded Url\",\"External link\",\"https:\/\/www.anthropic.com\/news\/lyft-announcement\",{\"metric25\":1}]]\" href=\"https:\/\/www.anthropic.com\/news\/lyft-announcement\" target=\"_blank\" rel=\"noopener noreferrer\">statement<\/a><\/span>. 
<\/p>\n<p>Lyft has already deployed Anthropic\u2019s Claude, via Amazon Bedrock (<span><a data-ga=\"[[\"Embedded Url\",\"External link\",\"https:\/\/qz.com\/quote\/AMZN\",{\"metric25\":1}]]\" href=\"https:\/\/qz.com\/quote\/AMZN\" target=\"_blank\" rel=\"noopener noreferrer\">AMZN<\/a><\/span>), in its customer care AI assistant to respond to support issues. Since the Claude integration, Lyft <span><a data-ga=\"[[\"Embedded Url\",\"External link\",\"https:\/\/www.lyft.com\/blog\/posts\/lyft-and-anthropic-team-up-to-redefine-customer-obsessed-ai\",{\"metric25\":1}]]\" href=\"https:\/\/www.lyft.com\/blog\/posts\/lyft-and-anthropic-team-up-to-redefine-customer-obsessed-ai\" target=\"_blank\" rel=\"noopener noreferrer\">said<\/a><\/span> its AI assistant has \u201creduced the average customer service resolution time by 87%.\u201d<\/p>\n<p>Additionally, Lyft will get early access to Anthropic\u2019s AI models and technology for research testing, and its engineering organization will be trained by the AI startup.<\/p>\n<\/div>\n<\/div>\n<div>\n<figure id data-id=\"608f3b10942aa354458c0455a71b2248\" data-recommend-id=\"image:\/\/608f3b10942aa354458c0455a71b2248\" data-format=\"jpg\" data-width=\"5000\" data-height=\"3333\" data-lightbox=\"false\" data-alt=\"meta logo on a white wall\" data-recommended=\"false\" data-hide=\"false\" contenteditable=\"false\" draggable=\"false\">\n<div contenteditable=\"false\" data-alt=\"meta logo on a white wall\" data-link-reference data-link-target data-syndicationrights=\"true\" data-imagerights=\"getty\" data-hide=\"false\" data-hidecredit=\"false\">\n<div><picture><source media=\"(max-width: 49.94em)\" type=\"image\/jpeg\" ><source media=\"(min-width: 50em) and (max-width: 63.69em)\" type=\"image\/jpeg\" ><source media=\"(min-width: 63.75em)\" type=\"image\/jpeg\" ><img decoding=\"async\" alt=\"meta logo on a white wall\" data-chomp-id=\"608f3b10942aa354458c0455a71b2248\" data-format=\"jpg\" data-height=\"3333\" data-alt=\"meta logo 
on a white wall\" data-anim-src src=\"https:\/\/i.kinja-img.com\/image\/upload\/c_fit,q_60,w_645\/608f3b10942aa354458c0455a71b2248.jpg\"><\/picture><\/div>\n<p><figcaption>Meta logo in Davos, Switzerland on January 18, 2024.<\/figcaption><figcaption>Photo: FABRICE COFFRINI\/AFP (Getty Images)<\/figcaption><\/p>\n<\/div>\n<p><span data-id=\"608f3b10942aa354458c0455a71b2248\" data-recommend-id=\"image:\/\/608f3b10942aa354458c0455a71b2248\" data-format=\"jpg\" data-width=\"5000\" data-height=\"3333\" data-lightbox=\"false\" data-alt=\"meta logo on a white wall\" data-recommended=\"false\" data-hide=\"false\"><\/span><\/figure>\n<div>\n<p>Meta (<span><a data-ga=\"[[\"Embedded Url\",\"External link\",\"https:\/\/qz.com\/quote\/META\",{\"metric25\":1}]]\" href=\"https:\/\/qz.com\/quote\/META\" target=\"_blank\" rel=\"noopener noreferrer\">META<\/a><\/span>) shared its <span>Frontier AI Framework<\/span> this week, which outlines how the company assesses risk when deciding whether to release an AI model. The framework follows the commitment Meta made last year at the AI Seoul Summit. <\/p>\n<p>The framework is focused \u201con the most critical risks in the areas of cybersecurity threats and risks from chemical and biological weapons,\u201d Meta said. 
That way, Meta said it \u201ccan work to protect national security while promoting innovation.\u201d<\/p>\n<p>Some of the areas that the framework focuses on are: identifying potential catastrophic outcomes it can prevent, modeling how bad actors can misuse frontier AI, and defining thresholds for risk based on its threat modeling exercises.<\/p>\n<\/div>\n<\/div>\n<div>\n<figure id data-id=\"99f1121fc7db57cbeb6e006aa74f3d99\" data-recommend-id=\"image:\/\/99f1121fc7db57cbeb6e006aa74f3d99\" data-format=\"jpg\" data-width=\"1968\" data-height=\"946\" data-lightbox=\"false\" data-alt=\"aerial view of a large group of people\" data-recommended=\"false\" data-hide=\"false\" contenteditable=\"false\" draggable=\"false\">\n<div contenteditable=\"false\" data-alt=\"aerial view of a large group of people\" data-link-reference data-link-target data-syndicationrights=\"false\" data-imagerights=\"other-license\" data-hide=\"false\" data-hidecredit=\"false\">\n<div><picture><source media=\"(max-width: 49.94em)\" type=\"image\/jpeg\" ><source media=\"(min-width: 50em) and (max-width: 63.69em)\" type=\"image\/jpeg\" ><source media=\"(min-width: 63.75em)\" type=\"image\/jpeg\" ><img decoding=\"async\" alt=\"aerial view of a large group of people\" data-chomp-id=\"99f1121fc7db57cbeb6e006aa74f3d99\" data-format=\"jpg\" data-height=\"946\" data-alt=\"aerial view of a large group of people\" data-anim-src src=\"https:\/\/i.kinja-img.com\/image\/upload\/c_fit,q_60,w_645\/99f1121fc7db57cbeb6e006aa74f3d99.jpg\"><\/picture><\/div>\n<p><figcaption>StackAdapt team.<\/figcaption><figcaption>Photo: StackAdapt<\/figcaption><\/p>\n<\/div>\n<p><span data-id=\"99f1121fc7db57cbeb6e006aa74f3d99\" data-recommend-id=\"image:\/\/99f1121fc7db57cbeb6e006aa74f3d99\" data-format=\"jpg\" data-width=\"1968\" data-height=\"946\" data-lightbox=\"false\" data-alt=\"aerial view of a large group of people\" data-recommended=\"false\" data-hide=\"false\"><\/span><\/figure>\n<div>\n<p>Programmatic advertising 
platform StackAdapt announced a $235 million growth capital raise this week. The funding was led by Teachers\u2019 Venture Growth, with participation from five other investors, including Intrepid Growth Partners.<\/p>\n<p>The Canadian company uses AI and automation to provide advertising and marketing technology. The investment will go toward scaling its research and development and expanding globally.<\/p>\n<p>\u201cThe challenges marketing teams face are vast and evolving rapidly,\u201d Vitaly Pecherskiy, co-founder and CEO of StackAdapt, said in a <span><a data-ga=\"[[\"Embedded Url\",\"External link\",\"https:\/\/www.businesswire.com\/news\/home\/20250203806442\/en\/StackAdapt-Secures-235M-USD-Investment-Led-by-Teachers%E2%80%99-Venture-Growth\",{\"metric25\":1}]]\" href=\"https:\/\/www.businesswire.com\/news\/home\/20250203806442\/en\/StackAdapt-Secures-235M-USD-Investment-Led-by-Teachers%E2%80%99-Venture-Growth\" target=\"_blank\" rel=\"noopener noreferrer\">statement<\/a><\/span>. \u201cMuch of the pressure to drive growth rests on their shoulders as they work to reinvent operations and discover new ways to reach customers effectively, profitably, and predictably. To help them stay ahead of the curve, we are relentlessly focused on building the most advanced, intelligent, and automated platform to make their success inevitable.\u201d<\/p>\n<\/div>\n<\/div>\n<\/div>\n<p><a href=\"https:\/\/qz.com\/openai-deep-research-ai-agent-bytedance-video-meta-lyft-1851758117\" class=\"button purchase\" rel=\"nofollow noopener\" target=\"_blank\">Read More<\/a><br \/>\n Britney Nguyen<\/p>\n","protected":false},"excerpt":{"rendered":"<p>OpenAI\u2019s deep research in ChatGPT. Image: OpenAI Each week, Quartz rounds up product launches, updates, and funding news from artificial intelligence-focused startups and companies. Here\u2019s what\u2019s going on this week in the ever-evolving AI industry. OpenAI\u2019s deep research in ChatGPT. 
Image: OpenAI OpenAI launched its Deep Research AI agent this week that can \u201csynthesize large<\/p>\n","protected":false},"author":1,"featured_media":825557,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[77235,3801],"tags":[],"class_list":{"0":"post-825556","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-openais","8":"category-research"},"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/posts\/825556","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/comments?post=825556"}],"version-history":[{"count":0,"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/posts\/825556\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/media\/825557"}],"wp:attachment":[{"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/media?parent=825556"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/categories?post=825556"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/tags?post=825556"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}