Yu-Ting Feng


GEN:48 AI Shorts Competition

This article was written in collaboration with AI.

AI Video Creation: The Collision of Infinite Possibilities and Limited Imagination

A couple of weeks ago, I participated in a 48-hour AI video creation contest organized by Runway ML. Coincidentally, when I visited CNEX that same day, I happened upon an AI lecture. These experiences prompted me to sort through some of my recent observations and thoughts.

Infinite Capabilities, Limited Imagination

I've actually been using Runway since they launched their first-generation product. Lately there's been more and more buzz about it in VFX circles, but most people are still only using it for simple Image to Video conversions. Perhaps because credits are limited, everyone tends to be conservative with it.

When I saw this contest, I thought: maybe this is the chance to really play around with it. After all, the credits provided for the competition basically let us go all out. So I teamed up with photographer Ah-Tang and motion designer Mei-Le, and the three of us took advantage of Taiwan's time difference to work like crazy for 48 hours. (If you're curious about how we worked, check out the BTS video at the end.)

Since this was a fun experiment, we tried to use AI tools as much as possible throughout the process.

Concept Development: From Life to AI

In recent years, many of my friends have had children, some of them through assisted reproductive technology. I've always felt that having a child, especially through assisted reproduction, is an incredibly demanding process. So we decided to create a short film about the emotional journey from expectation to disappointment.

As for why we chose Iceland as the backdrop, it was a bit of a coincidence: our photographer, Ah-Tang, happened to have a lot of Iceland footage, and Runway is particularly good at animating existing images.

Creative Process: A 48-Hour Marathon Aided by AI

Day One: Conceptualization and Visualization

After Runway announced the theme, we first ran a few ideas using their preset options, then discussed back and forth, gradually nailing down our direction. Once we had a rough storyline, we used ChatGPT to write out scene outlines and descriptions.
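
We did this step in the ChatGPT web interface, but the same step could easily be scripted. Below is a minimal sketch using the OpenAI Python SDK; the model name, prompt wording, and logline are illustrative assumptions, not what we actually used:

```python
# Sketch: generating a scene-by-scene outline from a logline.
# Assumes the OpenAI Python SDK (`pip install openai`) and an
# OPENAI_API_KEY in the environment; model and prompt are illustrative.
from openai import OpenAI

client = OpenAI()

LOGLINE = (
    "A couple in Iceland moves from hopeful expectation to quiet "
    "disappointment while trying to conceive through assisted reproduction."
)

response = client.chat.completions.create(
    model="gpt-4o",  # any capable chat model would do here
    messages=[
        {
            "role": "system",
            "content": (
                "You are a screenwriter. Break the logline into numbered "
                "scenes. For each scene, give a one-line action summary and "
                "a visual description usable as an image-generation prompt."
            ),
        },
        {"role": "user", "content": LOGLINE},
    ],
)

print(response.choices[0].message.content)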

Next, we fed these scene descriptions into Midjourney to generate images we felt were suitable. Mei-Le, our art director, set the overall visual direction at this stage.

Then began the crazy cycle: Midjourney → Runway → Midjourney, constantly adjusting until we found the ideal visual effect.
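
For us, every cycle was manual, driven through the Midjourney and Runway web UIs. Today the Runway half of the loop could be scripted. Here is a rough sketch assuming Runway's developer SDK (`pip install runwayml`); the model id, field names, and image URL are assumptions based on my reading of the public API docs and may differ:

```python
# Sketch: turning a still frame into a short clip via Runway's API.
# Assumes the `runwayml` SDK and a RUNWAYML_API_SECRET env var;
# model id and parameters are assumptions, not verified against our project.
import time

from runwayml import RunwayML

client = RunwayML()

task = client.image_to_video.create(
    model="gen3a_turbo",  # assumed model id
    prompt_image="https://example.com/iceland_still.png",  # hypothetical URL
    prompt_text="Slow push-in over a glacial lagoon at dusk, drifting mist.",
)

# The API is asynchronous: poll until the render finishes or fails.
while True:
    task = client.tasks.retrieve(task.id)
    if task.status in ("SUCCEEDED", "FAILED"):
        break
    time.sleep(10)

print(task.status, getattr(task, "output", None))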

Day Two: Editing and Post-Production

Once most of the scenes were complete, I started importing the files into Premiere for editing. This stage was actually quite similar to a traditional workflow. It's worth mentioning that Runway partners with Epidemic Sound, which also has a Premiere plugin, so the audio could be handled very quickly.

Finally, we brought the edit into DaVinci Resolve for color grading and final output.

Reflection: The Future of AI Creation

The contest results are now posted on Runway's website, and honestly, every entry is worth watching. In the past, I might have treated these works merely as reference samples, but recently my perspective has shifted: if they are to stand as final products, then what they represent is a stylistic choice.

AI Application in Documentaries: New Possibilities for Anonymity

At the CNEX lecture, I noticed a particular application of AI. In a BBC documentary about alcohol addiction, the filmmakers used deepfake technology to replace interviewees' faces instead of the traditional mosaic blur. This not only protected the interviewees' privacy but also preserved the atmosphere of the scenes far better.
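
For contrast, the "traditional mosaic" being replaced here is a very simple operation: detect a face region, downsample it, and scale it back up. A minimal sketch with OpenCV; the file names are placeholders, and this is of course not the BBC's pipeline:

```python
# Sketch: traditional mosaic (pixelation) anonymization with OpenCV.
# Requires `pip install opencv-python`; "frame.png" is a placeholder.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

img = cv2.imread("frame.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
    face = img[y:y + h, x:x + w]
    # Downsample to a tiny grid, then blow it back up: the classic mosaic.
    tiny = cv2.resize(face, (8, 8), interpolation=cv2.INTER_LINEAR)
    img[y:y + h, x:x + w] = cv2.resize(tiny, (w, h), interpolation=cv2.INTER_NEAREST)

cv2.imwrite("frame_mosaic.png", img)
```

A deepfake replacement keeps expression and gaze where a mosaic erases them, which is exactly why it recreates the scene's atmosphere so much better.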

The lecture that day mainly discussed how, given that collaboration with AI is unavoidable, we can establish rules to keep problems like the Cambridge Analytica scandal on social networks from happening again. Many major platforms have already started to address these issues.


Lately, I keep thinking: in the Star Wars era, filmmakers could imagine things they couldn't shoot; in our era, we can shoot things we can't even imagine. Technological progress has always been a process of liberating labor. If the B+ scripts are already being written, what kind of A+ script should we be striving for?

BTS of the project