
Open Source Video Production Software


So you've just finished your Ultra-Mega-Marathon Streaming session… but now you need to cut out those not-so-funny parts, the false starts, all those flubs, add some lower thirds, clean up/normalize your audio levels, etc.
In other words, you want to take it up a notch. So you think 'Hey, I'm gonna edit/clean up/layer on some nice visuals for my videos', but you don't want to pay $800 for Final Cut Pro+Logic+Motion, or $50/month to Adobe to use Audition/Premiere/After Effects.
OBS is free (as in beer and speech) software, so aren't there other tools out there that are also free (maybe even open source)?
Yes!
And here is a list of them just for you:
1. Video Editors are here (as their name implies) mostly to edit videos. Some can also be used for compositing and/or audio post-production work, but their main focus is on helping you add/remove/layer parts of your video and audio to create the final file.
  1. Mac/Windows/Linux - Avidemux: http://fixounet.free.fr/avidemux/ A simple editor that seems to be able to load almost any format, and provides a simple (but fast and efficient) cuts-only, single-track editor. Good if you want to just remove content from a file quickly and simply.
  2. Mac/Windows/Linux - OpenShot (2.1+): http://www.openshot.org/ A multi-format editor with some neat features. Open source and multi-platform, but unstable (i.e. it crashes a lot). I would recommend using something else until they fix their stability issues.
  3. Linux/Mac/Windows - Blender (from the Blender Foundation): https://www.blender.org/ This program started off its life as a 3D graphics program, but has expanded in recent years to include decent/basic video editing and compositing tools as well. Also open source.
  4. Mac/Windows/Linux - LightWorks (from EditShare): https://www.lwks.com/ A world-class editor available for Linux, OS X and Windows. A free license is available that lets you edit up to HD (1920x1080) material, and export up to 720p directly to YouTube, and 1080p to Vimeo. Excellent free training material from the creators that not only teaches you the program, but also some of the 'Art of Editing.'
  5. Mac/Windows/Linux - DaVinci Resolve (from Blackmagic Design): https://www.blackmagicdesign.com/ A program that started out as a color-corrector/grading tool, but in the past several releases has gotten a ton of video editing features. If you like world-class color grading, color correction, masking, and decent to excellent video and audio editing, this one might be good for you. (Also now includes Fusion integrated for even faster work/turnaround).
  6. Linux, macOS and Windows - Kdenlive: https://kdenlive.org/ I haven't used this editor, but people say good things about it. On Linux check your repos for it (for everyone else grab the installer from their website), install it and give it a whirl.
  7. Windows/Mac/Linux - Shotcut: https://www.shotcutapp.com/ EBrito recommends checking this one out. Make sure to follow the tutorials on their website.
  8. Windows/Mac - Avid Media Composer First: http://www.avid.com If you want to go pro, then this is it. What Hollywood, the big studios and a lot of independents use. Even though it's free (as in beer) it still has more than enough features to satisfy 99%+ of video editing needs. Loads of video tutorials are available on YouTube as well.

2. Audio Editors/Digital Audio Workstations (DAWs) are very helpful in getting your audio to a good quality level, so that people won't turn you off the moment they hear you speak. Remember that most people will watch video that sucks in quality, but most people won't listen to bad/too-quiet/distorted audio for very long. So help get your audio in shape with these programs:
  1. Mac/Windows - Pro Tools First (from Avid Technologies): http://apps.avid.com/ProToolsFirst/ If you like Pro(fessional) Tools with an interface that takes some time to learn (but can take you places), then this is it. The biggest and baddest of the pro-audio software world, for free.
  2. Linux/Mac/Windows - Audacity (from Team Audacity): http://web.audacityteam.org/ Open source, and the only actual editor in the list (all the other programs listed are DAWs). Used by many, with many tutorials available online to help you get past the learning curve of this program.
  3. Linux/Mac/Windows - Ardour: https://ardour.org/ Open source and professional, this tool falls in the same league as Pro Tools and Reaper. Good if you like a professional DAW interface and the learning curve that goes with it. (Can also be used as a JACK source for pre-processing audio to feed to OBS on Linux and macOS.) Version 5 now includes an official Windows release.
  4. Linux/Mac/Windows - LMMS: https://lmms.io/ Open source, free and (IMO) not bad at all. The interface seems to be geared more towards song writing/looping than straight-up editing, but if none of the above catch your fancy, you might want to check this one out.
  5. Some honorable mentions: Cubase LE 9 (for Mac and Windows) and Cakewalk/SONAR by BandLab (Windows only?). Free (as in beer), but not open source. Professional-class tools with the interfaces (and the learning curve) to go along with them.

3. Compositors/Motion Graphics. Ever seen a fancy opening to a stream or video? Ever seen overlays (static or moving) that grab your attention? Ever wondered 'How'd they do that?' Well compositing software (e.g. After Effects) is usually the tool involved, and below are some (very) powerful programs:
  1. Windows/Mac/Linux - Fusion (from Blackmagic Design): https://www.blackmagicdesign.com/ Hollywood-level effects, but with a bit of a learning curve (no more than After Effects though). Once you learn it you'll be blowing by anything done in After Effects. Also has powerful color correction, masking and keying tools. Takes advantage of your GPU for faster rendering. (Fusion is now integrated with Resolve, so you might be able to take advantage of that integration by using Resolve outright.)
  2. Linux/Mac/Windows - Blender (from the Blender Foundation): https://www.blender.org/ Wait, Blender again? Well it's a versatile program that in recent years has added compositing tools to its arsenal. Also open source.
  3. Linux/Mac/Windows - Natron: http://natron.fr/ A multi-platform compositor. On the surface appears to be similar to Blackmagic Design's Fusion. Open source as well.

4. 3D Graphics. 3D is hard, but the stuff you can create with even just a little bit of skill can truly separate your content from everyone else. Even if just used as elements that you composite together, 3D is a very powerful tool in your post-production toolkit.
  1. Linux/Mac/Windows - Blender (from the Blender Foundation): https://www.blender.org/ Again? Well this time we're actually addressing what Blender does best and has been doing the longest: 3D.

Remember that no matter what the software, there is a learning curve involved. So use Google to find tutorials, hit the software creator's forums and, most importantly, be patient with yourself. It can take some time, but you'll be able to put out higher quality work and help differentiate yourself from the masses of streams/Let's Plays/videos out there with just a little bit of Post-Production Tender-Loving-Care.

April 9th, 2020

With many of us around the globe under shelter in place due to COVID-19, video calls have become a lot more common. In particular, Zoom has controversially become very popular. Arguably Zoom's most interesting feature is the 'Virtual Background' support which allows users to replace the background behind them in their webcam video feed with any image (or video).

I've been using Zoom for a long time at work for Kubernetes open source meetings, usually from my company laptop. With daily 'work from home' I'm now inclined to use my more powerful and ergonomic personal desktop for some of my open source work.

Unfortunately, Zoom's Linux client only supports the 'chroma-key' A.K.A. 'green screen' background removal method. This method requires a solid color backdrop, ideally a green screen with uniform lighting.

Since I do not have a green screen I decided to simply implement my own background removal, which was obviously better than cleaning my apartment or just using my laptop all the time. :grin:

It turns out we can actually get pretty decent results with off the shelf, open source components and just a little of our own code.

# Reading The Camera

First things first: how are we going to get the video feed from our webcam for processing?

Since I use Linux on my personal desktop (when not playing PC games) I chose to use the OpenCV python bindings as I'm already familiar with them and they include useful image processing primitives in addition to V4L2 bindings for reading from webcams.

Reading a frame from the webcam with python-opencv is very simple:

For better results with my camera before capturing set:

Most video conferencing software seems to cap video to 720p @ 30 FPS or lower, but we won't necessarily read every frame anyhow; this just sets an upper limit.

Put the frame capture in a loop and we've got our video feed!

We can save a test frame with just:


And now we can see that our camera works. Success!

# Finding The Background

OK, now that we have a video feed, how do we identify the background so we can replace it? This is the tricky part …

While Zoom doesn't seem to have commented anywhere about how they implemented this, the way it behaves makes me suspect that a neural network is involved; it's hard to explain, but the results look like one. Additionally, I found an article about Microsoft Teams implementing background blur with a convolutional neural network.

Creating our own network wouldn't be too hard in principle – there are many articles and papers on the topic of image segmentation and plenty of open source libraries and tools, but we need a fairly specialized dataset to get good results.

Specifically we'd need lots of webcam-like images with the ideal human foreground marked pixel by pixel versus the background.

Building this sort of dataset in preparation for training a neural net probably would be a lot of work. Thankfully a research team at Google has already done all of this hard work and open sourced a pre-trained neural network for 'person segmentation' called BodyPix that works pretty well! ❤️

BodyPix is currently only available in TensorFlow.js form, so the easiest way to use it is from the body-pix-node library.

To get faster inference (prediction) in the browser a WebGL backend is preferred, but in node we can use the TensorFlow GPU backend (NOTE: this requires an NVIDIA graphics card, which I have).

To make this easier to set up, we'll start by setting up a small containerized tensorflow-gpu + node environment / project. Using this with nvidia-docker is much easier than getting all of the right dependencies set up on your host; it only requires docker and an up-to-date GPU driver on the host.

Now to serve the results… WARNING: I am not a node expert! This is just my quick evening hack, bear with me :-)

The following simple script replies to an HTTP POSTed image with a binary mask (a 2d array of binary pixels, where zero pixels are the background).

We can use numpy and requests to convert a frame to a mask from our python script with the following method:
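A sketch of such a method (the service URL, port, and wire format of the local BodyPix server are assumptions here — the server replies with one byte per pixel in this sketch):

```python
import numpy as np
import requests

def get_mask(frame, bodypix_url="http://localhost:9000"):
    """POST raw frame bytes to the (assumed) local BodyPix service and
    decode the reply into a 2d uint8 mask: 1 = person, 0 = background."""
    r = requests.post(
        url=bodypix_url,
        data=frame.tobytes(),
        headers={"Content-Type": "application/octet-stream"},
    )
    mask = np.frombuffer(r.content, dtype=np.uint8)
    return mask.reshape((frame.shape[0], frame.shape[1]))
```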

Which gives us a result something like:

While I was working on this, I spotted this tweet:

This is definitely the BEST background for video calls. 💯 pic.twitter.com/Urz62Kg32k

— Ashley Willis (McNamara) (@ashleymcnamara) April 2, 2020

Now that we have the foreground / background mask, it will be easy to replace the background.

After grabbing the awesome 'Virtual Background' picture from that twitter thread and cropping it to a 16:9 ratio image …

… we can do the following:
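With `frame`, `mask`, and a `background` image of the same size (stand-in arrays below), the per-pixel swap can be done like this:

```python
import numpy as np

# Stand-in data: a gray frame, a black backdrop, and a rectangular "person".
frame = np.full((720, 1280, 3), 128, dtype=np.uint8)
background = np.zeros((720, 1280, 3), dtype=np.uint8)
mask = np.zeros((720, 1280), dtype=np.uint8)
mask[200:500, 400:900] = 1

# Keep frame pixels where mask == 1, take the background everywhere else.
frame = np.where(mask[:, :, None] == 1, frame, background)
```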

Which gives us:

The raw mask is clearly not tight enough due to the performance trade-offs we made with our BodyPix parameters, but so far, so good!

This background gave me an idea …

# Making It Fun

Now that we have the masking done, what can we do to make it look better?

The first obvious step is to smooth the mask out, with something like:

This can help a bit, but it's pretty minor, and just replacing the background is a little boring. Since we've hacked this up ourselves, we can do anything instead of just basic background removal …

Given that we're using a Star Wars 'virtual background' I decided to create a hologram effect to fit in better. This also lets us lean into blurring the mask.

First update the post processing to:

Now the edges are blurry which is good, but we need to start building the hologram effect.


Hollywood holograms typically have the following properties:

  • washed out / monochromatic color, as if done with a bright laser
  • scan lines or a grid-like effect, as if many beams created the image
  • 'ghosting' as if the projection is done in layers or imperfectly reaching the correct distance

We can add these step by step.

First for the blue tint we just need to apply an OpenCV colormap:

Then we can add the scan lines with a halftone-like effect:

Next we can add some ghosting by adding weighted copies of the current effect, shifted along an axis:

Last: we'll want to keep some of the original color, so let's combine the holo effect with the original frame similar to how we added the ghosting:

A frame with the hologram effect now looks like:

On its own this looks pretty :shrug:

But combined with our virtual background it looks more like:

There we go! :tada: (I promise it looks cooler with motion / video :upside_down_face:)

# Outputting Video

Now we're just missing one thing … We can't actually use this in a call yet.

To fix that, we're going to use pyfakewebcam and v4l2loopback to create a fake webcam device.

We're also going to actually wire this all up with docker.

First create a requirements.txt with our dependencies:
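Presumably just the libraries used above (unpinned here; pinning versions is wiser in practice):

```
numpy
opencv-python
requests
pyfakewebcam
```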

And then the Dockerfile for the fake camera app:
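A minimal sketch — the base image, file names, and the `fake.py` entrypoint are all assumptions about the project layout:

```dockerfile
FROM python:3.8-buster
WORKDIR /src
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY fake.py .
ENTRYPOINT ["python", "-u", "fake.py"]
```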

We're going to need to install v4l2loopback from a shell:
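On Debian/Ubuntu that's the DKMS package (the package name is an assumption; adjust for your distro):

```shell
sudo apt install v4l2loopback-dkms
```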

And then configure a fake camera device:
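Loading the module with the options described below (devices=1 is my assumption; the other options match the explanation that follows):

```shell
sudo modprobe v4l2loopback devices=1 video_nr=20 \
  card_label="v4l2loopback" exclusive_caps=1
```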

We need the exclusive_caps setting for some apps (Chrome, Zoom) to work; the label is just for our convenience when selecting the camera in apps, and the video number just makes this /dev/video20 if available, which is unlikely to be already in use.

Now we can update our script to create the fake camera:

We also need to note that pyfakewebcam expects images in RGB (red, green, blue)while our OpenCV operations are in BGR (blue, green, red) channel order.

We can fix this before outputting and then send a frame with:

All together the script looks like:

Now build the images:
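Something like this (the image names and build-context directories are assumptions about the project layout):

```shell
docker build -t bodypix ./bodypix
docker build -t fakecam ./fakecam
```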

And run them like:
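Roughly like this — the GPU flag, networking, and device paths depend on your setup; /dev/video0 is assumed to be the real camera and /dev/video20 the fake one:

```shell
# The bodypix service needs the GPU runtime and a reachable HTTP port.
docker run -d --name bodypix --gpus all --network=host bodypix
# The fakecam app needs both the real and the fake video devices.
docker run -d --name fakecam --network=host \
  --device=/dev/video0 --device=/dev/video20 fakecam
```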


Now make sure to start this before opening the camera with any apps, andbe sure to select the 'v4l2loopback' / /dev/video20 camera in Zoom etc.

# The Finished Result

Here's a quick clip I recorded of this in action:


Look! I'm dialing into the Millennium Falcon with an open source camera stack!

I'm pretty happy with how this came out. I'll definitely be joining all of my meetings this way in the morning. :grin:




