Isn't it fascinating how a few shapes can communicate so well? For example, "loss" can be expressed using just 7 line segments. Not only that: take any picture, such as that Asgore truck meme:
With just under 500 circles, you can get an extremely good approximation.
Done with Geometrize.
This must have been fascinating for me in 7th grade, because within a week I had converted like 10 pictures to Desmos and sent them to my friends. Two years later, I was thinking about that week and how fun it was, so I decided to do it again with the 67 kid.
Drawing an Image of the 67 Kid
Convertmos, the tool I usually used for this task, was down, so I had to dig through the Google results for "convert an image to desmos". Despite the functionality being pretty basic, tools for this purpose were really scarce: they were either monochromatic, inaccurate, or only worked on very grainy images. However, after 10 minutes of searching, I found a perfect GitHub repository for the job. Its demonstration was so realistic you couldn't distinguish it from the input, it had many arguments to tweak the generation, and it was code, so I could modify it to convert videos, which was my end goal.
The program works in two steps: it uses mathematical tools to group the image into similarly colored regions, then sketches the contour of each region using Fourier transforms.
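I won't reproduce the repository here, but the second step can be sketched in a few lines: sample a region's contour as complex numbers, take its discrete Fourier transform, and keep only the low-frequency terms. (All names below are mine, not the repo's.)

```python
import numpy as np

def fourier_contour(points, n_coeffs=20):
    """Approximate a closed contour with a truncated Fourier series.

    points: (N, 2) array of (x, y) samples along the contour, in order.
    Returns a function t -> (x, y) for t in [0, 1).
    """
    z = points[:, 0] + 1j * points[:, 1]          # encode the contour as complex numbers
    coeffs = np.fft.fft(z) / len(z)               # Fourier coefficients of the closed curve
    freqs = np.fft.fftfreq(len(z), d=1 / len(z))  # integer frequencies 0, 1, ..., -1

    # Keep only the n_coeffs lowest-frequency terms (the coarse shape).
    keep = np.argsort(np.abs(freqs))[:n_coeffs]

    def curve(t):
        val = sum(coeffs[k] * np.exp(2j * np.pi * freqs[k] * t) for k in keep)
        return val.real, val.imag

    return curve

# A circle sampled at 100 points is reproduced almost exactly by 5 terms.
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
curve = fourier_contour(circle, n_coeffs=5)
x, y = curve(0.0)  # starts at (1, 0), the contour's first sample
```

More coefficients means a tighter fit to the region's outline, at the cost of longer equations in the output.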
I downloaded the code and executed it on an example image, but the output was so large I had to send it directly to my clipboard:
python path_to_converter path_to_image | pbcopy
However, this turned out to be a major hurdle. You see, these converters spit out code you have to paste into Desmos's console to take effect. But when I opened Desmos, the console was greyed out. I tried half an hour of shortcuts and bypasses, but it seemed that DevTools was completely broken on the website. I'm not the kind of person to give up, though, so I hosted Desmos myself.
I mean, it wasn't very hard. You can get your own API key here, and the code is just like 10 lines. But you have to appreciate the dedication. The result looked beautiful as well.
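For reference, those ~10 lines amount to a tiny HTML page embedding the calculator. Here's a sketch as a Python script that writes the page; `YOUR_API_KEY` is a placeholder for the key you request from Desmos, and the `v1.9` script URL is an assumption (check the Desmos API docs for the current version).

```python
# Writes a minimal self-hosted Desmos page. The script tag and the
# Desmos.GraphingCalculator call follow the public Desmos API docs;
# YOUR_API_KEY is a placeholder, not a real key.
PAGE = """<!DOCTYPE html>
<html>
  <body>
    <div id="calculator" style="width: 100vw; height: 100vh;"></div>
    <script src="https://www.desmos.com/api/v1.9/calculator.js?apiKey=YOUR_API_KEY"></script>
    <script>
      var elt = document.getElementById("calculator");
      var Calc = Desmos.GraphingCalculator(elt);  // paste converter output after this line
    </script>
  </body>
</html>
"""

with open("index.html", "w") as f:
    f.write(PAGE)
```

Serve it with something like `python -m http.server` and open it in a browser; since it's your page, the console is all yours.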
Animating a Video of the 67 Kid
A video is just a sequence of images presented in rapid succession. To animate videos in Desmos, you animate the individual frames and play them together. First, let's talk about how I got the frames.
First, I used Stacher to download the video. I'm not sure why I didn't just screen-record it, but it's probably because YouTube would mess up the quality or my crop would be bad. Then, I cut out the parts I wanted and used ezgif to convert them to a 12 fps image sequence. Finally, I used ffmpeg to convert the images to .png format, which is the only format my program accepts:
cd path_to_folder
mkdir pngs
for f in *.jpg; do ffmpeg -i "$f" "pngs/${f%.*}.png"; done
But now, the REAL challenge shows its face. How do I turn an image converter into a batch image converter that also accounts for order?
Well, batch processing is easy. I just altered the code to accept a folder and ran the conversion process on each image inside it in a loop.
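The altered loop amounts to something like this (a sketch; `convert_image` stands in for the repository's per-image entry point, which I'm not naming here):

```python
import os

# A sketch of the batch loop. convert_image is a stand-in for whatever the
# converter exposes for a single image; sorting the filenames keeps ezgif's
# numbered frames (frame_001.png, frame_002.png, ...) in playback order.
def convert_folder(folder, convert_image):
    frames = sorted(f for f in os.listdir(folder) if f.endswith(".png"))
    return [convert_image(os.path.join(folder, f)) for f in frames]
```

The sort is what "accounts for order": as long as the frames are zero-padded numerically, lexicographic order is playback order.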
To display equations in succession in Desmos, you usually use a time slider and conditions. For example, if t increases by one unit per second, and you condition an equation to display only when \(0\leq t \lt \frac{1}{12}\) (yay, the latex rendered!), then it will flash for precisely \(\frac{1}{12}\) of a second. If we have \(n\) images and let the \(i^{th}\) one appear when \(\frac{i-1}{12}\leq t \lt \frac{i}{12}\), the video will display correctly.
Adding a slider is easy: you can do it by hand. Appending conditions to the generated equations, however, is far from trivial. Firstly, the sheer quantity of equations makes it impossible to condition them by hand. Secondly, we have to alter the Python code, which in turn alters the console code, which in turn alters the equations. This is way harder than manipulating the equations directly.
Fortunately, Desmos made their API incredibly easy to use. It turns out that, to add an equation in the console, you just pass an object with some parameters like color, line width, and the LaTeX for the equation. Because the JavaScript format is simple, modifying it with Python is easy too, especially in our case where we only need to change the LaTeX component.
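Emitting one of those objects from Python is a one-liner with `json.dumps`. A sketch of the idea, not the converter's actual code: `Calc` is the conventional name for the calculator instance, and the option names (`latex`, `color`, `lineWidth`) follow the Desmos API.

```python
import json

# Each equation the console script adds is just an object with a few fields,
# so emitting one line of JavaScript per equation from Python is easy.
def to_set_expression(latex, expr_id, color="#2d70b3", line_width=2):
    expr = {"id": expr_id, "latex": latex, "color": color, "lineWidth": line_width}
    return f"Calc.setExpression({json.dumps(expr)});"
```

A nice side effect of going through `json.dumps` is that it handles the quoting and backslash-escaping of the LaTeX string for you.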
Well, how did I change the LaTeX? The ezgif conversion numbered the frames, so I looped through them in order and kept a loop counter to track the current frame. Then, I did some calculations like the one three paragraphs ago.
Of course, there were some hurdles here as well, like being forced to use the LaTeX \left\{ and \right\} commands instead of plain curly brackets, and the backslashes kept escaping each other in confusing ways. But these were relatively easy to sort out.
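Putting the pieces together, the restriction step might look something like this (a sketch; the function name and `fps` default are mine, and the real converter's output format will differ):

```python
# Desmos hides an expression outside its restriction, so appending
# {(i-1)/fps <= t < i/fps} makes frame i flash at exactly the right moment.
# Note the \left\{ ... \right\} pair and the doubled backslashes.
def add_condition(latex, i, fps=12):
    lo = f"\\frac{{{i - 1}}}{{{fps}}}"
    hi = f"\\frac{{{i}}}{{{fps}}}"
    return latex + f"\\left\\{{{lo}\\le t<{hi}\\right\\}}"

# add_condition("y=x", 1)
# -> y=x\left\{\frac{0}{12}\le t<\frac{1}{12}\right\}
```

Between Python's f-string braces and LaTeX's escaped braces, this little function is exactly where the backslashes start escaping each other.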
Mission accomplished, right...?
No.
You see, the results were so big they couldn't fit in my clipboard. I had to send them to a file first:
python path_to_converter path_to_folder --verbose > new_file_path_and_name.js
(--verbose just makes it print progress)
The js file was a whopping 278 MB. Which is pretty big, I think. I didn't listen in CS class. Anyways, for a while I tried various methods to feed the file straight into the console, but none of them worked. Even if they had, it would have been useless because, as my later experiments showed, the console itself can't handle the file either: it could process a meager maximum of 10 frames at a time. So I sliced the file into 10-line segments (each frame is one giant line in the output):
split -l 10 -d -a 2 js_file des && for f in des*; do mv "$f" "$f.js"; done
And went through all of them, copying them...
cat des05.js | pbcopy
And manually executing and recording the effects of each of the 19 files.
Well, remember when I said that each frame would last a twelfth of a second? I lied. The processing of the frames was too slow, and they could not appear on time if the slider was too quick. In fact, the processing was so laborious that my computer ran extremely hot and the browser tab consumed 4.4 GB of memory. That is DEFINITELY a lot.
Anyways, after I had gathered the footage, I strung it together in iMovie and sped it up using ffmpeg.
ffmpeg -i iMovie_file -filter:v "setpts=PTS/105.6" -filter:a "atempo=10.56,atempo=10" -r 30 hyperspeed.mov
(atempo is chained because a single atempo filter can't exceed a 100x speedup)
I sped it up by a factor of 105.6 because that is the ratio between how long 10 frames should take to display, \(\frac{5}{6}\) of a second, and how long a file slice actually took, 88 seconds.
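Sanity-checking that factor:

```python
# 10 frames at 12 fps should take 10/12 = 5/6 of a second to display,
# but one 10-frame file slice actually took 88 seconds to render.
intended = 10 / 12
actual = 88
speedup = actual / intended  # ~105.6
```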
All that effort, just for a silly meme.
FINAL RESULT:
desmos animation