> merely

You say this as if it were not a big deal, but losing a century's worth of authentication infrastructure and practices is a Bad Thing that will have large negative externalities.
You're being downvoted, but I think the comment raises a good question: what will happen when someone is accused of doctoring their dashcam footage? Or any footage used as evidence?
I expect this type of system to be implemented in my lifetime. It will allow whistleblowers and investigative sources to be discredited or tracked down and persecuted.
Ironically, low-effort deep fakes might increase trust in organizations that have had the budget to fake footage since their inception. The losers are 'citizen journalists' broadcasting on YouTube etc.
The paper mentions it uses Diffusion Transformers. The open-source implementation that comes up on Google is Facebook Research's PyTorch implementation, which is under a non-commercial license: https://github.com/facebookresearch/DiT

Is there something equivalent under MIT or Apache? I feel like diffusion transformers are key now. I wonder if OpenAI implemented their Sora stuff from scratch or built on the Facebook Research diffusion transformer library. It would be interesting if they violated the non-commercial terms.

Hm. Found one: https://github.com/milmor/diffusion-transformer-keras
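For context, the distinctive idea in the DiT architecture is conditioning each transformer block through adaptive layer norm (adaLN): scale, shift, and gate vectors are regressed from the timestep/class embedding instead of being fixed learned parameters. Here's a rough NumPy sketch of just that modulation step; all shapes and weights below are made up for illustration, not taken from any of the repos above.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize over the last (feature) dimension, no learned affine;
    # the affine part comes from the conditioning instead.
    mu = x.mean(-1, keepdims=True)
    var = x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def adaln_modulate(x, cond, W, b):
    # Regress shift/scale/gate from the conditioning vector.
    # (In adaLN-Zero, W and b of the final layer are zero-initialized
    # so each block starts out as the identity; here they're random.)
    shift, scale, gate = np.split(cond @ W + b, 3, axis=-1)
    return gate[:, None, :] * (
        layer_norm(x) * (1 + scale[:, None, :]) + shift[:, None, :]
    )

# Toy shapes: batch of 2, sequence of 4 patch tokens, width 8.
rng = np.random.default_rng(0)
x = rng.normal(size=(2, 4, 8))       # patch tokens
cond = rng.normal(size=(2, 8))       # timestep/class conditioning
W = rng.normal(size=(8, 24)) * 0.02  # illustrative random weights
b = np.zeros(24)

out = adaln_modulate(x, cond, W, b)
print(out.shape)  # (2, 4, 8)
```

In the full model this modulation wraps both the attention and MLP sublayers of every block; the sketch only shows why the conditioning pathway is cheap relative to cross-attention.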
I like the considerations section. There's likely also an unsaid statement: this is for us only, and we'll be the only ones making money from it, with our definition of "safety" and "positive".
So an ugly person will be able to present his or her ideas on the same visual level as a beautiful person. Is this some sort of democratization?
Yeah: the teeth, the tongue movement and lack of tongue shape, and the "stretching" of the skin around the cheeks pushed the videos right into the uncanny valley for me.
My first thought was "oh no the interview fakes", but then I realized - what if they just kept using the face? Would I care?
Yeah, even if they just use LLMs to do all the work, or are an LLM themselves, as long as they can do the work, I guess. Weird implications for various regulations, though.
The GPU requirements for realtime video generation are very minimal in the grand scheme of things. Assault on reality itself.
The issue is better phrased as “how will we survive the transition while some folk still believe the video they are seeing is irrefutable proof the event happened?”
Presidential elections are frequently pretty close. Taking the electoral college into account (not the popular vote, which doesn't matter), Donald Trump won the 2016 election by a grand total of ~80,000 votes in three states[0]. Knowing that retractions rarely get viral exposure, it's not difficult to imagine that a few sufficiently viral videos could swing enough votes to impact a presidential election. Especially when considering that the average person is not up to speed on the current state of the tech, and so has not been prompted to build up the mindset required to fend off this new threat.

[0] https://www.washingtonpost.com/news/the-fix/wp/2016/12/01/do...
I still find the faces themselves to be really obviously wrong. The sound is just off, close enough to tell who is being imitated but not particularly good.
It's interesting to me that some of the long-standing things are still there. For example, lots of people with an earring in only one ear, unlikely asymmetry in the shape or size of their ears, etc.
I find the hair to be the least realistic; it looks elastic, which is unsurprising: highly detailed things like hair are hard to simulate with good fidelity.
If beautiful people have an advantage in the job market, maybe people will use deepfake technology when doing zoom interviews? Maybe they will use it to alter their accent?
Cool! Now we can expect to see an endless stream of dead presidents' speeches "LIVE" from the White House. This should end well.

Meanwhile, yesterday my credit card company asked me if I wanted to use voice authentication to verify my identity "more securely" on the phone. Surely the company spent many millions of dollars to enable this new security-theater feature.

It raises the question: is every single executive and manager at my credit card company completely unaware that right now anyone can clone anyone else's voice by obtaining a short sample audio clip from any social network? And if anyone is aware, why is the company acting like this?

Corporate America is so far behind the times it's not even funny.