{"id":862368,"date":"2025-07-14T02:13:31","date_gmt":"2025-07-14T07:13:31","guid":{"rendered":"https:\/\/newsycanuse.com\/index.php\/2025\/07\/14\/i-used-veo-3-to-recreate-the-first-youtube-video-and-the-results-are-almost-too-good\/"},"modified":"2025-07-14T02:13:31","modified_gmt":"2025-07-14T07:13:31","slug":"i-used-veo-3-to-recreate-the-first-youtube-video-and-the-results-are-almost-too-good","status":"publish","type":"post","link":"https:\/\/newsycanuse.com\/index.php\/2025\/07\/14\/i-used-veo-3-to-recreate-the-first-youtube-video-and-the-results-are-almost-too-good\/","title":{"rendered":"I used Veo 3 to recreate the first YouTube video, and the results are almost too good"},"content":{"rendered":"<div data-widget-type=\"contentparsed\" id=\"content\">\n<section>\n<div>\n<div>\n<picture data-new-v2-image=\"true\"><source type=\"image\/webp\"  ><img decoding=\"async\" alt=\"Combo image of first YouTube video and an AI recreation image grab\"   data-new-v2-image=\"true\" src=\"https:\/\/cdn.mos.cms.futurecdn.net\/QyJpZiEZtTttfoaoYWWdLV.jpg\" data-pin-media=\"https:\/\/cdn.mos.cms.futurecdn.net\/QyJpZiEZtTttfoaoYWWdLV.jpg\" data-pin-nopin=\"true\" fetchpriority=\"high\">\n<\/picture>\n<\/div><figcaption>\n<span>(Image credit: Future)<\/span><br \/>\n<\/figcaption><\/div>\n<div id=\"article-body\">\n<p>We all know the story of the <a data-analytics-id=\"inline-link\" href=\"https:\/\/www.techradar.com\/computing\/social-media\/youtubes-20th-anniversary-i-hope-you-know-you-are-the-way-you-are-because-of-youtube\" data-before-rewrite-localise=\"https:\/\/www.techradar.com\/computing\/social-media\/youtubes-20th-anniversary-i-hope-you-know-you-are-the-way-you-are-because-of-youtube\">first YouTube video<\/a>, a grainy 19-second clip of co-founder Jawed Karim at the zoo, remarking on the elephants behind him. 
That video was a pivotal moment in the digital space, and in some ways, it is a reflection, or at least an inverted mirror image, of today as we digest the arrival of Veo 3.<\/p>\n<p>Part of Google Gemini, <a data-analytics-id=\"inline-link\" href=\"https:\/\/www.techradar.com\/computing\/artificial-intelligence\/googles-veo-3-marks-the-end-of-ai-videos-silent-era\" data-before-rewrite-localise=\"https:\/\/www.techradar.com\/computing\/artificial-intelligence\/googles-veo-3-marks-the-end-of-ai-videos-silent-era\">Veo 3<\/a> was unveiled at <a data-analytics-id=\"inline-link\" href=\"https:\/\/www.techradar.com\/news\/live\/google-i-o-2025-live-project-astra-gemini-and-more\" data-before-rewrite-localise=\"https:\/\/www.techradar.com\/news\/live\/google-i-o-2025-live-project-astra-gemini-and-more\">Google I\/O 2025<\/a> and is the first generative video platform that can, with a single prompt, generate a video with synced dialogue, sound effects, and background noises. Most of these 8-second clips arrive in under 5 minutes after you enter the prompt.<\/p>\n<p>I&#8217;ve been <a data-analytics-id=\"inline-link\" href=\"https:\/\/www.techradar.com\/computing\/artificial-intelligence\/i-just-used-veo-3-to-create-a-wild-ai-video-and-its-easier-than-you-think\" data-before-rewrite-localise=\"https:\/\/www.techradar.com\/computing\/artificial-intelligence\/i-just-used-veo-3-to-create-a-wild-ai-video-and-its-easier-than-you-think\">playing with Veo 3 for a couple of days,<\/a> and for my latest challenge, I tried to go back to the beginning of social video and that <a data-analytics-id=\"inline-link\" href=\"https:\/\/www.techradar.com\/tag\/youtube\" data-auto-tag-linker=\"true\" data-before-rewrite-localise=\"https:\/\/www.techradar.com\/tag\/youtube\">YouTube<\/a> <a data-analytics-id=\"inline-link\" href=\"https:\/\/www.youtube.com\/watch?v=jNQXAC9IVRw\" target=\"_blank\" data-url=\"https:\/\/www.youtube.com\/watch?v=jNQXAC9IVRw\" 
referrerpolicy=\"no-referrer-when-downgrade\" data-hl-processed=\"none\">&#8220;Me at the Zoo&#8221;<\/a> moment. Specifically, I wondered if Veo 3 could recreate that video.<\/p>\n<p>As I&#8217;ve written, the key to a good Veo 3 outcome is the prompt. Without detail and structure, Veo 3 tends to make the choices for you, and you usually don&#8217;t end up with what you want. For this experiment, I wondered how I could possibly describe all the details I wanted to derive from that short video and deliver them to Veo 3 in the form of a prompt. So, naturally, I turned to another AI.<\/p>\n<p>Google Gemini 2.5 Pro is not currently capable of analyzing a URL, but <a data-analytics-id=\"inline-link\" href=\"https:\/\/www.techradar.com\/computing\/artificial-intelligence\/ive-tried-googles-new-ai-mode-and-now-you-can-too-here-are-3-tips-for-getting-more-from-googles-new-free-ai-search-tool\" data-before-rewrite-localise=\"https:\/\/www.techradar.com\/computing\/artificial-intelligence\/ive-tried-googles-new-ai-mode-and-now-you-can-too-here-are-3-tips-for-getting-more-from-googles-new-free-ai-search-tool\">Google AI Mode<\/a>, the brand-new form of search that is quickly spreading across the US, is.<\/p>\n<p>Here&#8217;s the prompt I dropped into Google&#8217;s AI Mode:<\/p>\n<figure data-bordeaux-image-check>\n<div>\n<p><picture><source type=\"image\/webp\"  ><img decoding=\"async\" alt=\"AI Mode URL analysis\"   loading=\"lazy\" src=\"https:\/\/cdn.mos.cms.futurecdn.net\/YdA57CC4QZRtdptN6wEVxE.png\" data-pin-media=\"https:\/\/cdn.mos.cms.futurecdn.net\/YdA57CC4QZRtdptN6wEVxE.png\"><\/picture><\/p>\n<\/div><figcaption itemprop=\"caption description\"><span itemprop=\"copyrightHolder\">(Image credit: Future)<\/span><\/figcaption><\/figure>\n<p>Google AI Mode almost instantly returned with a detailed description, which I took and dropped into the Gemini Veo 3 prompt field.<\/p>\n<div data-hydrate=\"true\" 
id=\"slice-container-newsletterForm-articleInbodyContent-fzDEiSLpcG6RJQHcELA7mV\">\n<\/div>\n<p>I did do some editing, mostly removing phrases like &#8220;The video appears&#8230;&#8221; and the final analysis at the end, but otherwise, I left most of it and added this at the top of the prompt:<\/p>\n<p><em>&#8220;Let&#8217;s make a video based on these details. The output should be 4:3 ratio and look like it was shot on 8MM videotape.&#8221;<\/em><\/p>\n<p>It took a while for Veo 3 to generate the video (I think the service is getting hammered right now), and, because it only creates 8-second chunks at a time, it was incomplete, cutting off the dialogue mid-sentence.<\/p>\n<p>Still, the result is impressive. I wouldn&#8217;t say that the main character looks anything like Karim. To be fair, the prompt doesn&#8217;t describe, for instance, Karim&#8217;s haircut, the shape of his face, or his deep-set eyes. Google&#8217;s AI Mode&#8217;s description of his outfit was also probably insufficient. I&#8217;m sure it would have done a better job if I had fed it a screenshot of the original video.<\/p>\n<p>Note to self: You can never offer enough detail in a generative prompt.<\/p>\n<h2 id=\"8-seconds-at-a-time-3\">8 seconds at a time<\/h2>\n<p>The Veo 3 video zoo is nicer than the one Karim visited, and the elephants are much further away, though they are in motion back there.<\/p>\n<p>Veo 3 got the film quality right, giving it a nice 2005 look, but not the 4:3 aspect ratio. It also added archaic and unnecessary labels at the top that thankfully disappear quickly. I realize now I should have removed the &#8220;Title&#8221; bit from my prompt.<\/p>\n<p>The audio is particularly good. 
Dialogue syncs well with my main character and, if you listen closely, you&#8217;ll hear the background noises, as well.<\/p>\n<p>The biggest issue is that this was only half of the brief YouTube video. I wanted a full recreation, so I decided to go back in with a much shorter prompt:<\/p>\n<p><em>Continue with the same video and add him looking back at the elephants and then looking at the camera as he&#8217;s saying this dialogue: <\/em><\/p>\n<p><em>&#8220;fronts and that&#8217;s that&#8217;s cool.&#8221; &#8220;And that&#8217;s pretty much all there is to say.&#8221;<\/em><\/p>\n<p>Veo 3 complied with the setting and main character but lost some of the plot, dropping the old-school grainy video of the first generated clip. This means that when I present them together (as I do above), we lose considerable continuity. It&#8217;s like a film crew time jump, where they suddenly got a much better camera.<\/p>\n<p>I&#8217;m also a bit frustrated that all my Veo 3 videos have nonsensical captions. I need to remember to ask Veo 3 to remove, hide, or put them outside the video frame.<\/p>\n<p>I think about how hard it probably was for Karim to film, edit, and upload that first short video and how I just made essentially the same clip without the need for people, lighting, microphones, cameras, or elephants. I didn&#8217;t have to transfer footage from tape or even from an iPhone. I just conjured it out of an algorithm. We have truly stepped through the looking glass, my friends.<\/p>\n<p>I did learn one other thing through this project. As a Google AI Pro member, I have two Veo 3 video generations <em>per day<\/em>. That means I can do this again tomorrow. 
Let me know in the comments what you&#8217;d like me to create.<\/p>\n<h3 id=\"section-you-might-also-like\"><span>You might also like<\/span><\/h3>\n<ul>\n<li><a href=\"https:\/\/www.techradar.com\/computing\/artificial-intelligence\/i-just-used-veo-3-to-create-a-wild-ai-video-and-its-easier-than-you-think\" data-before-rewrite-localise=\"https:\/\/www.techradar.com\/computing\/artificial-intelligence\/i-just-used-veo-3-to-create-a-wild-ai-video-and-its-easier-than-you-think\">I created these wild AI videos in Veo 3 and here&#8217;s how you can do it too<\/a><\/li>\n<li><a href=\"https:\/\/www.techradar.com\/news\/live\/google-i-o-2025-live-project-astra-gemini-and-more\" data-before-rewrite-localise=\"https:\/\/www.techradar.com\/news\/live\/google-i-o-2025-live-project-astra-gemini-and-more\">Google I\/O 2025 as it happened: AI Search, Veo, Flow &#8230;<\/a><\/li>\n<li><a href=\"https:\/\/www.techradar.com\/computing\/artificial-intelligence\/the-13-biggest-announcements-from-google-i-o-2025\" data-before-rewrite-localise=\"https:\/\/www.techradar.com\/computing\/artificial-intelligence\/the-13-biggest-announcements-from-google-i-o-2025\">The 13 biggest announcements from Google I\/O 2025<\/a><\/li>\n<li><a href=\"https:\/\/www.techradar.com\/computing\/artificial-intelligence\/googles-veo-3-marks-the-end-of-ai-videos-silent-era\" data-before-rewrite-localise=\"https:\/\/www.techradar.com\/computing\/artificial-intelligence\/googles-veo-3-marks-the-end-of-ai-videos-silent-era\">Google&#8217;s Veo 3 marks the end of AI video&#8217;s &#8216;silent era&#8217;<\/a><\/li>\n<\/ul>\n<\/div>\n<div id=\"slice-container-authorBio-fzDEiSLpcG6RJQHcELA7mV\">\n<p>A 38-year industry veteran and <a href=\"https:\/\/cdn.mos.cms.futurecdn.net\/ox35RKH2kNKBfSBfvHEoK6.jpg\">award-winning journalist<\/a>, Lance has covered technology since PCs were the size of suitcases and \u201con line\u201d meant \u201cwaiting.\u201d He\u2019s a former Lifewire Editor-in-Chief, Mashable 
Editor-in-Chief, and, before that, Editor in Chief of PCMag.com and Senior Vice President of Content for Ziff Davis, Inc. He also wrote a popular, weekly tech column for Medium called The Upgrade.<\/p>\n<p><a href=\"https:\/\/en.wikipedia.org\/wiki\/Lance_Ulanoff\" target=\"_blank\">Lance Ulanoff<\/a> makes frequent appearances on national, international, and local news programs including Live with Kelly and Mark, the <a href=\"https:\/\/www.today.com\/video\/google-glass-is-beginning-of-a-revolution-44496451646\" target=\"_blank\">Today Show<\/a>, Good Morning America, CNBC, CNN, and the BBC.\u00a0<\/p>\n<\/div>\n<\/section>\n<\/div>\n<p><a href=\"https:\/\/www.techradar.com\/computing\/artificial-intelligence\/i-used-veo-3-to-recreate-the-first-youtube-video-and-the-results-are-almost-too-good\" class=\"button purchase\" rel=\"nofollow noopener\" target=\"_blank\">Read More<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>(Image credit: Future) We all know the story of the first YouTube video, a grainy 19-second clip of co-founder Jawed Karim at the zoo, remarking on the elephants behind him. 
That video was a pivotal moment in the digital space, and in some ways, it is a reflection, or at least an inverted mirror image, [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":862369,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[673,32331,104640],"tags":[],"class_list":{"0":"post-862368","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-first","8":"category-recreate","9":"category-youtube-videos"},"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/posts\/862368","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/comments?post=862368"}],"version-history":[{"count":0,"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/posts\/862368\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/media\/862369"}],"wp:attachment":[{"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/media?parent=862368"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/categories?post=862368"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/tags?post=862368"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}