I went cross-eyed on my screenshot, and I couldn't read the word, but I did notice some artifacts
JadeNB 2 hours ago [-]
This was used in some early-20th-century astronomical setting, I think to detect supernovae. I can't find any documentation now, but my memory is that it was called "blink testing" or something similar, where one switched rapidly between two images of a star field so that changes due to a supernova would stand out.
Lighten, Screen, Addition, Darken, Multiply, Linear burn, Hard Mix, Difference, Exclusion, Subtract, Grain Extract, Grain Merge, or Luminance.
https://ibb.co/DDQBJDKR
You actually don't need any image editing skill. Here is a browser-only solution:
1. Take two screenshots.
2. Open these screenshots in two separate tabs on your browser.
3. Switch between tabs very, very quickly (use CTRL-Tab)
Source: tested on Firefox
Thank you forever for this; I'd always used Ctrl-Page Up/Down for that.
What does that accomplish? You can just read the web page as-is...
Are you going to share your two screenshots, and provide those instructions, with others? That seems impractical.
Video recording is a bit less impractical, but there you really need a short looping animation to avoid ballooning the file size. An actual readable screenshot has its advantages...
azinman2 3 hours ago [-]
You could also just record a video.
bonyt 1 hour ago [-]
I've found that taking two screenshots and adding them as separate layers works well: set one layer's blend mode to Difference, then tweak the opacity.
Here it is in Pixelmator Pro: https://i.moveything.com/299930fb6174.mp4
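For anyone who'd rather script that than open an editor - a minimal sketch with Pillow, assuming two captures saved as shot1.png and shot2.png (hypothetical filenames):

  # difference_blend.py - approximate the Difference-layer trick in code.
  # shot1.png / shot2.png are two screenshots of the same page region.
  from PIL import Image, ImageChops, ImageOps

  a = Image.open("shot1.png").convert("L")  # grayscale keeps it simple
  b = Image.open("shot2.png").convert("L")

  diff = ImageChops.difference(a, b)        # only the moving pixels light up
  diff = ImageOps.autocontrast(diff)        # stretch contrast for readability
  diff.save("revealed.png")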
A friend of mine made a similar animated-GIF-type captcha a few years ago, based on multiple scrolling horizontal bars that would each reveal their portion of the underlying image, including the letters, and made a (friendly) bet that it should be pretty hard to solve.
Grabbing the entire set of frames, greyscaling them, averaging over all of them, and then applying a few minor fixups like thresholding and contrast adjustment worked easily enough, since the letters were revealed in more frames than not (though I don't think the difficulty would change much if that were different). After that, the rest of the image was pretty amenable to character recognition.
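That averaging attack is only a few lines - a sketch under the assumption that the GIF's frames have already been extracted to a frames/ directory (hypothetical path):

  # average_frames.py - average a captcha's frames; pixels that belong to
  # the letters in most frames dominate the mean and survive a threshold.
  import glob
  import numpy as np
  from PIL import Image

  frames = [np.asarray(Image.open(p).convert("L"), dtype=np.float64)
            for p in sorted(glob.glob("frames/*.png"))]

  mean = sum(frames) / len(frames)
  binary = (mean > mean.mean()).astype(np.uint8) * 255  # crude threshold
  Image.fromarray(binary).save("letters.png")           # hand this to OCR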
aw1621107 2 hours ago [-]
That's reminiscent of a (possibly apocryphal?) method I once read about for getting "clean" images of normally crowded public places: take multiple photos over time, then take the per-pixel median across the stack. I never had the opportunity to try it myself, but it sounded plausible as a way to get rid of transient "noise" in an otherwise static image.
https://digital-photography-school.com/taking-photos-in-busy...
https://petapixel.com/2019/09/18/how-to-shoot-people-free-ph...
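The method is real and easy to sketch - assuming a stack of tripod shots in a shots/ directory (hypothetical path), the per-pixel median discards anything that isn't there most of the time:

  # median_stack.py - remove transient crowds by taking the per-pixel
  # median over many photos taken from the same position.
  import glob
  import numpy as np
  from PIL import Image

  stack = np.stack([np.asarray(Image.open(p).convert("RGB"))
                    for p in sorted(glob.glob("shots/*.jpg"))])

  clean = np.median(stack, axis=0).astype(np.uint8)  # passers-by vanish
  Image.fromarray(clean).save("clean.png")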
But it only works well if the crowds move out of the way reasonably quickly. If we're talking about areas packed with people all blocking a certain area, and you need hours of shots, the change in ambient lighting over time will have negative effects on the end photo.
postalcoder 9 hours ago [-]
Out of sheer curiosity, I put three screenshots of the noise into Claude Opus 4.1, Gemini 2.5 Pro, and GPT 5, all with thinking enabled with the prompt “what does the screen say?”.
Opus 4.1 flagged the message due to prompt injection risk, Gemini made a bad guess, and GPT 5 got it by using the code interpreter.
I thought it was amusing. Claude's (non-)response got me thinking - first, it was very on brand; second, the content filter was right: pasting images of seemingly random noise into a sensitive environment is a terrible idea.
BLIT protection. https://www.infinityplus.co.uk/stories/blit.htm
Only if your rendering libraries are crap.
They even provide the source code for the effect:
https://github.com/brantagames/noise-shader
apricot 4 hours ago [-]
> pasting images of seemingly random noise into a sensitive environment is a terrible idea
It reminds me of the mid-1990s video game Magic Carpet.
https://en.wikipedia.org/wiki/Magic_Carpet_(video_game)
This was a pseudo-3D game and on an ordinary display it used perspective to simulate 3D like most games. If you had 3D goggles it could use them, but I didn't.
However, it could do a true 3D display on a 2D monitor using a random-dot stereogram.
https://en.wikipedia.org/wiki/Random_dot_stereogram
If you have depth perception and are able to see RDS autostereograms, then Magic Carpet did an animated one. It was a wholly remarkable effect, but for me anyway, it was really hard to watch. It felt like it was trying to rotate my eyeballs in their sockets. Very impressive, but essentially unplayable; I could only watch for a minute or two before I couldn't stand the discomfort any more.
Also playable in the browser: https://playclassic.games/games/action-dos-games-online/play...
xnx 2 hours ago [-]
I played the game, but had no idea about that feature.
https://silverspaceship.com/static/
Really clever use of a TV remote as controller.
https://upload.wikimedia.org/wikipedia/en/a/ab/AnyMinuteNow....
https://www.youtube.com/watch?v=Bg3RAI8uyVw
The effect is disrupted by introducing rendering artifacts, by watching the video in 144p or in this case by zooming out.
I'd love to know the name of this effect, so I can read more about the fMRI studies that make use of it.
What I've found so far:
Random Dot Kinematogram
Perceptual Organization from Motion (video of Flounder camouflage)
https://www.youtube.com/watch?v=2VO10eDIyiE
I'm wondering. Can we also come up with something the other way around? Text you cannot read, unless you take a screenshot?
adrianmonk 2 hours ago [-]
You could probably do it with timing tricks related to video refresh. Wait until the monitor has finished refreshing, then draw the text into the framebuffer. Leave the text there a short while, but erase it before the monitor starts refreshing again. Repeat.
The screenshot would have a chance of capturing the text, depending on exactly when the screenshot pulls pixel data out of the framebuffer.
This might not work on all devices: you need access to refresh-timing information, and the mechanism screenshots use to pull pixel data can also vary.
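A rough sketch of that loop; the platform calls are left as hypothetical stubs, since real vsync/framebuffer access varies wildly by OS and driver:

  # race_the_refresh.py - sketch: text that only a screenshot might catch.
  # wait_for_vblank(), draw_text(), erase_text() are hypothetical stubs for
  # whatever framebuffer/vsync access the platform actually provides.
  import time

  REFRESH = 1 / 60  # assume a 60 Hz display

  def wait_for_vblank():
      raise NotImplementedError("platform-specific")

  def draw_text(msg):
      raise NotImplementedError("platform-specific")

  def erase_text():
      raise NotImplementedError("platform-specific")

  while True:
      wait_for_vblank()        # scan-out finished; the monitor isn't reading
      draw_text("hello")       # text sits in the framebuffer...
      time.sleep(REFRESH / 2)  # ...where an async screenshot may sample it
      erase_text()             # erased again before the next scan-out begins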
pmontra 9 hours ago [-]
If anybody implements that to anti-screenshot some sensitive data, somebody else will just use another phone, a tablet, or a camera to record a video of it. Nice idea though.
gwbas1c 6 hours ago [-]
It's just adding friction: Someone determined will figure out a way to get the text.
Sometimes friction is enough.
jonathaneunice 6 hours ago [-]
Or the same one.
While a screencap image hides the message, a screencap video shows it perfectly well.
On iPhone: screen-record. Take screenshots every couple of seconds. Overlay the images with 50% transparency (I use Procreate Pocket for this part)
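The overlay step doesn't need a drawing app - a Pillow one-liner does the same 50% mix (frame filenames are hypothetical):

  # overlay_frames.py - blend two frames at 50% opacity, like the
  # Procreate Pocket step described above.
  from PIL import Image

  a = Image.open("frame1.png").convert("RGB")
  b = Image.open("frame2.png").convert("RGB")

  Image.blend(a, b, alpha=0.5).save("overlay.png")  # 50/50 mix of both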
HarHarVeryFunny 1 hour ago [-]
A single photo is good enough as long as the exposure time is long enough to capture the motion blur.
CaptainOfCoit 8 hours ago [-]
On Android: Take a look at the URL, see the text in plain-text :)
catlifeonmars 5 hours ago [-]
Nice. I did not think to look there.
birdman3131 1 hour ago [-]
So Windows 11 easily bypasses this when taking a screenshot: just switch to video mode. (Yeah yeah, not technically a screenshot, but it's the same default software built into Windows.)
NKosmatos 6 hours ago [-]
Nice one. The good (great?) thing is that you can save this as a plain old HTML file and you've got the whole code :-)
It hasn't got any type of license included or any other info as comments, so perhaps the creator or the OP can let us know.
Anonyneko 9 hours ago [-]
As soon as I read the title I knew it would be akin to "Bad Apple that disappears when you pause it"
This one is actually more sophisticated because it doesn't rely on scrolling pixels like the OP: the object doesn't just disappear in screenshots, but also whenever the animation stops moving! So you can't actually display text that stands still, like the "hello" in the OP.
shannifin 2 hours ago [-]
Yep. He tries text in another video by flipping pixels for one or more frames, so the words disappear very quickly. Definitely harder to read, especially longer words: https://youtu.be/EDQeArrqRZ4
https://www.youtube.com/watch?v=bVLwYa46Cf0
And another version of this, using apples instead of white noise
https://www.youtube.com/watch?v=r40AvHs3uJE
optionalsquid 6 hours ago [-]
I'm not sure I follow. Couldn't you display text that stands still by (re)drawing the outline of the text repeatedly? It would essentially be a two-frame animation.
derefr 2 hours ago [-]
I think the algorithm in the video is doing a very specific thing where there's a zero-width pixel-grid-clamped stroke (picture an etch-a-sketch-like seam carving "between" the bounds of pixels on the grid) moving about the grid, altering (with XOR?) anything it advances across.
So, sure, you could try to implement this by having a seam that is made to "reverberate" back and forth "across" the outlining pixels of a static shape on each frame. But that's not exactly the same thing as selecting the outline of the shape itself and having those pixels update each frame. Given the way this algorithm looks to work, pushing the seam "inwards" vs "outwards" across the same set of pixels forming the outline might gather an entirely different subset of pixels, creating a lot of holes or perhaps double-counting pixels.
And if you fix those problems, then you're not really using this algorithm any more; you're just doing the much-more-boring thing of taking a list of pixel positions forming the outline and updating them each frame. :)
cubefox 5 hours ago [-]
I believe the algorithm in the video works by flipping the pixel color when the pixel changes from foreground (some shape) to background, or from background to foreground. If the shape doesn't move, there is no such change, so it disappears.
In the OP the foreground pixels continuously change (scrolling in this case) while the background doesn't change. That's a different method of separating background and foreground.
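A small numpy sketch of that rule as I read it (my guess at the mechanism, not the video author's confirmed code): flip exactly the pixels whose foreground/background membership changed this frame.

  # flip_on_change.py - noise in which a shape is visible only while moving.
  import numpy as np

  rng = np.random.default_rng()
  h, w = 120, 160
  frame = rng.integers(0, 2, (h, w), dtype=np.uint8)  # start as pure noise

  def step(frame, shape_prev, shape_curr):
      changed = shape_prev ^ shape_curr  # pixels that switched fg<->bg
      return frame ^ changed             # flip only those pixels

  # demo: a rectangle sliding two pixels to the right
  shape_prev = np.zeros((h, w), dtype=bool)
  shape_prev[40:80, 30:60] = True
  shape_curr = np.roll(shape_prev, 2, axis=1)
  frame = step(frame, shape_prev, shape_curr)
  # if the shape stops (shape_prev == shape_curr), nothing flips and it
  # blends back into the noise - matching the behaviour described above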
Aeolun 10 hours ago [-]
This should have an epilepsy warning. Or something of that kind. It certainly made me feel sick.
injidup 9 hours ago [-]
This is more a curious question for those affected by epilepsy: if you know you are triggered by such things, how long an exposure is required to trigger an effect? Are you able to notice that media may be triggering and simply close it, or is the triggering almost instantaneous?
a3w 9 hours ago [-]
I saw the game using this rendering a few weeks ago, and it looked okay. Now I saw a font and tried to hold on to the edges while reading it, and yes, somehow this made me more (sea) sick. Strange.
Perhaps faces would be strongest in terms of reaction.
dorianmonnier 10 hours ago [-]
Oh yes please add a warning. My brain is burning right now!
kemayo 16 hours ago [-]
This makes me feel motion-sick, which is kind of impressive because I'm normally not easily susceptible to that.
dylan604 15 hours ago [-]
My eyes went straight into seeing 3D image mode. It's the easiest one I've seen yet! /s
hnlmorg 8 hours ago [-]
Hello, fellow person from the 90s. Mine eyes did the same too.
RedShift1 14 hours ago [-]
Heh my eyes felt like they started bleeding
quietfox 12 hours ago [-]
"The text disappears..."
And my eyesight with it
marcodiego 4 hours ago [-]
Instead of having the pixels on the letters scrolling down, wouldn't it also work if the pixels were simply re-randomized every frame?
NoahZuniga 3 hours ago [-]
yes
landgenoot 5 hours ago [-]
I think there are use cases for this.
Some countries switched to identity apps instead of plastic identity cards. You could make sensitive data non-screenshottable and non-photographable.
A modern variant to the passport anti identity fraud cover: https://merk.anwb.nl/transform/a9b4e52a-9ba1-414b-b199-29085...
The hotel you are checking in to doesn't need to know your DOB, height, SSN, birthplace, validity dates, or document number. But they will demand a photo of the ID anyway.
jlokier 4 hours ago [-]
> You could make sensitive data non-screenshottable and non-photographable.
That made me curious, so I took a photo of my laptop screen running this page.
With default camera settings, the text wasn't visible to me in the photo on my phone screen.
However, setting the exposure time manually to 0.5s, the text came out white on a noisy background and I could easily read it on the phone screen.
I would not be surprised if the default camera settings photo could be processed ("enhance!") to make the text visible, but I didn't try.
landgenoot 4 hours ago [-]
I think it also depends on the response time of the display and even the temperature.
dylan604 16 hours ago [-]
Has anyone tried a long exposure to see if the motion smears into something discernible? Obviously it's harder to expose a bright screen without some ND, since the shutter speed is the phone's main exposure control.
Perhaps this technique could be defeated by scrolling the background in the opposite direction from the text.
dylan604 5 hours ago [-]
That's what I was expecting to see. I didn't have a mount for my phone handy to try it. Exporting frames from a video is a good compromise though. Nice one.
lodovic 14 hours ago [-]
If you zoom out to 25%, the text is clearly visible and screenshottable.
EvgeniyZh 12 hours ago [-]
Probably the lower spatial frequencies of the noise are not matched? Not sure whether frequencies on the order of the movement frequency can actually be matched.
dasil003 14 hours ago [-]
How do you take a "long exposure" screenshot? Isn't every screenshot either a perfect digital copy of a single frame or a full-on video?
dylan604 14 hours ago [-]
Clearly, I meant using a camera, and I'm guessing you knew that too
dice 14 hours ago [-]
Not the parent, but that was not at all clear to me. I immediately thought of taking multiple successive instantaneous screenshots and then stacking them. I'm not sure I would have thought of using a camera within a few minutes to an hour; it's not a tool I would ever reach for normally.
catlifeonmars 13 hours ago [-]
I just did this with 50% transparency. It works
viccis 13 hours ago [-]
Also not the parent but how the hell did you not understand what "long exposure" means ffs
rkomorn 12 hours ago [-]
Because the context is about screenshots and context matters
"ffs".
dylan604 3 hours ago [-]
You mean like all of the context I used describing something that was not a screenshot? Being able to pick up on context clues from reading is a crucial skill one should have in life. It also makes one look less clueless in conversation when topics shift quickly and one can keep up.
rkomorn 2 hours ago [-]
None of this warrants the type of response they got, nor your attitude.
viccis 3 hours ago [-]
Periods go inside quotes, even mealy-mouthed shock quotes because an internet abbreviation made you upset.
rkomorn 2 hours ago [-]
Nah it's your attitude that brings nothing worthwhile.
DonHopkins 7 hours ago [-]
Oh, so your screenshot utility has "long exposure" and an "ND" filter and "shutter speed" controls, just like a phone's camera? What kind of screenshot utility simulates optical camera effects? What purpose does that serve? Care to share a link to it?
>Obviously harder to expose a bright screen without some ND since the shutter speed is the phone's main exposure control
https://en.wikipedia.org/wiki/Neutral-density_filter
https://en.wikipedia.org/wiki/Shutter_speed
On my Chrome-descended browser, the initial screen is populated by what appears to be some sort of downsampled grid image, resulting in black and white but also various shades of grey. However, the scrolling text is pure black and white. It also seems the canvas is persistent, so text on the canvas leaves a shadow for me, and I can still read that shadow. Somehow the initial noise is not coming out as just black-and-white pixels.
oniony 10 hours ago [-]
I don't see any text: just a scrolling down screen of random black/white pixels.
rnhmjoj 9 hours ago [-]
It seems to depend on reading pixels from a canvas. This is commonly used for fingerprinting users on the web, so you have to disable some privacy plugins.
Neat! I've seen stuff like this that works as a magic-eye thing: you cross your eyes (or make them parallel, depending on the type of image) and it makes a 3D animation appear in front of the page.
cal85 7 hours ago [-]
I’d like to see an example!
leogout 3 hours ago [-]
It also disappears if you shake your phone (or your computer screen, but that's harder)
https://unscreenshottable.vercel.app/?text=Bonjour
wink 11 hours ago [-]
Doesn't even show anything on LibreWolf, probably disabled WebGL as usual. I thought it was a nice error screen, but apparently it was intended, just without the text :P
creatonez 10 hours ago [-]
Seems to work if you disable canvas fingerprinting protection.
zikero 15 hours ago [-]
Another idea I had with this concept is to make an LLM-proof captcha. Maybe humans can detect the characters in the 'motion' itself, which could be unique to us?
- The captcha would be generated like this on a headless browser, and recorded as a video, which is then served to the user.
- We can make the background also move in random directions, to prevent just detecting which pixels are changing and drawing an outline.
- I tried also having the text itself move (bounce like the DVD logo). Somehow makes it even more readable.
I definitely know nothing about how LLMs interpret video, or optics, so please let me know if this is dumb.
xandrius 10 hours ago [-]
I don't think we need more capable people thinking of silly captchas.
15 hours ago [-]
pwdisswordfishz 8 hours ago [-]
Take N screenshots, XOR them pairwise, OR the results, then perform normal OCR.
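A minimal numpy sketch of exactly that pipeline (screenshot paths hypothetical; the OCR step is left to whatever tool you prefer):

  # xor_or_attack.py - N still screenshots -> one OCR-able image:
  # XOR consecutive pairs (changed pixels = the animated text),
  # then OR the pair-results together to fill in the letter shapes.
  import glob
  import numpy as np
  from PIL import Image

  shots = [np.asarray(Image.open(p).convert("L")) > 127  # binarize
           for p in sorted(glob.glob("shots/*.png"))]

  acc = np.zeros_like(shots[0])
  for a, b in zip(shots, shots[1:]):
      acc |= a ^ b

  Image.fromarray(acc.astype(np.uint8) * 255).save("for_ocr.png")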
ranger_danger 2 hours ago [-]
Yes but this is prohibitively expensive for a large bot network to employ.
Wasn't that the whole point of Anubis?
squigz 15 hours ago [-]
As if captchas aren't painful enough for visually impaired users...
creata 8 hours ago [-]
Fun!
I always wanted to make text that couldn't be recorded with a video recorder, but that doesn't seem possible.
Maybe if you knew the exact framerate that the camera was recording at, you could do the same trick, but I don't think cameras are that consistent.
vivegi 14 hours ago [-]
Cool. I used the Windows snipping tool and just screen-recorded it.
jiehong 8 hours ago [-]
I have to admit it's a pretty cool idea.
At first I was worried that there was a (stupid) API in web browsers, just like on mobile, to prevent users from screenshotting something by blanking the screen in the screenshot.
Izkata 16 hours ago [-]
Firefox on Android seems to just be a static image, I can't see any text.
creatonez 10 hours ago [-]
Probably the result of canvas fingerprinting protection configured in your `about:config`? With a default profile it seems to work fine on Firefox for Android.
Izkata 7 hours ago [-]
I haven't changed any of that on here.
Looks like I consistently get just the static image when I open in a new tab then switch to it, but then if I refresh the page without switching tabs it'll show the animation.
This would make for a great effect for a technothriller. Like a cyber ransom or something like that.
p0w3n3d 9 hours ago [-]
This idea has made me think of another subject: would it be possible to overload a face or license-plate scanning camera by using a pattern, like a QR code for example? Or a jacket made of QR codes?
https://en.wikipedia.org/wiki/Dazzle_camouflage
It's a nice effect, but I don't think it's usable in practice, because it's not accessible for visually impaired users: not enough contrast between foreground text and background
shrikant 9 hours ago [-]
Could someone please post what this disappeared bit is supposed to look like? Looks legible to me when I screenshot and open in Preview on MacOS 15.6.1 (Firefox).
grumbel 9 hours ago [-]
You are probably browsing with zoom; that seems to screw up the rendering and makes the background and text look different. It should be just black-and-white random pixel noise for both background and foreground; without motion the text becomes invisible, as it blends with the background.
bix6 16 hours ago [-]
Ha cool! How’s it work?
Lalabadie 16 hours ago [-]
The only way to see the text is in the movement. The pattern across any single frame is entirely random noise.
thaumasiotes 13 hours ago [-]
> The pattern across any single frame is entirely random noise.
This is untrue in at least one sense. The patterning within the animated letters cycles. It is generated either by evaluating a periodic function or by reading from a file using a periodic offset.
giveita 12 hours ago [-]
Can't it be continuous random noise added at the top and then moved down each frame?
Roughly: you create another full-size rect. On each frame, add a row of random pixels at row 1 and shift everything down.
Make that rect a layer below the top one, which has "Hello" cut out as transparent.
In any single frame the result is random noise.
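That's essentially how I'd sketch it too - for illustration (my own toy version, not the page's actual source), a noise buffer that scrolls down one row per frame, composited through a text-shaped mask over static background noise:

  # scrolling_noise.py - any single frame is uniform noise; the text is
  # only visible as coherent motion across frames.
  import numpy as np

  rng = np.random.default_rng()
  h, w = 120, 320
  background = rng.integers(0, 2, (h, w), dtype=np.uint8)  # static noise
  scroller = rng.integers(0, 2, (h, w), dtype=np.uint8)    # moving noise
  mask = np.zeros((h, w), dtype=bool)
  mask[40:80, 60:260] = True  # stand-in for a rendered text mask

  def next_frame():
      global scroller
      scroller = np.roll(scroller, 1, axis=0)              # shift down
      scroller[0] = rng.integers(0, 2, w, dtype=np.uint8)  # fresh top row
      return np.where(mask, scroller, background)          # composite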
thaumasiotes 12 hours ago [-]
You could do that, but that's not what the page is doing.
You don't even need to maintain the approach of having the pattern within the text move downwards over time. You could redraw it every frame with random data, as if it was television static. It would still be easy to read, as long as the background stayed fixed.
ranger_danger 2 hours ago [-]
They didn't mean random noise as in certifiably truly random in a cryptographic sense... nobody cares about that for a silly demo.
Random noise as in: a normal non-tech human cannot see anything discernible in it at all, without the motion component.
This could be used for Captcha systems. Would current bots be able to decipher these?
ranger_danger 2 hours ago [-]
Yes, you can make ChatGPT decipher this already.
Also, it's even harder to read than most captchas.
But fun idea, it was nice to see.
But doing this on a massive scale would warm the planet.
And it's not friendly accessibility-wise.
db48x 10 hours ago [-]
Sure, but I can just record a video instead. It doesn’t disappear then!
viccis 13 hours ago [-]
For what it's worth, there are some websites that embed some crazy shit when you screenshot. On reddit, r/CenturyClub will fill your background with a slightly off-white version of your username so that they can identify leakers, and I'm not certain how exactly they do it.
elAhmo 11 hours ago [-]
If you blink really fast, the text almost disappears.
altcognito 16 hours ago [-]
Fun side effect: staring at the letters for a bit makes the rest of the image move.
Theodores 4 hours ago [-]
> let textString = `hello`
I think further obfuscation could be possible by uglifying the script and providing an SVG path that stores the text as a vector image.
Self modifying code could be useful too, to delete the SVG data once it is in the canvas.
I fully expect this to still be defeated by AI though, such is my presumption that AI is smarter than me, always. It won't care about uglification and it would just laugh to itself at my humble efforts to defeat Skynet.
Regarding practical applications, nowadays kids sell weed online quite brazenly on platforms such as Instagram. Prostitutes also sell their services on Telegram. It is only a matter of time before this type of usage gets clamped down on, so there may come a time when this approach will be needed to thwart the authorities.
magios 11 hours ago [-]
Firefox on Linux, with a bunch of CSS stuff set to defaults or `none !important`, shows a static image
EGreg 4 hours ago [-]
This is good but I feel it can somehow be made better!
I like the idea of motion revealing things out of randomness and screenshots are random.
You can just take a screencast though hehe
3 hours ago [-]
bilsbie 6 hours ago [-]
Ultimately people will just take photos of the screen. Seems like you’re just annoying people.
I feel like there’s an ethical issue. If something is on my screen I own it. I know the law doesn’t agree but it feels right to me.
sarreph 6 hours ago [-]
The point is that it's noise and you can't "capture" a still image of the text / information (relies on motion to be viewable).
ranger_danger 2 hours ago [-]
We figured out how to capture video though. And ChatGPT can already decipher this.
cryptoz 16 hours ago [-]
Had a lot of fun trying to break this. Turns out you can screenshot real easily by zooming out. Maybe there are other ways but I stopped trying :)
vunderba 15 hours ago [-]
Yeah - I was actually initially confused, since I wasn't having any issues screenshotting it, but I'd forgotten that I have my default site zoom set to ~65%.
sans_souse 16 hours ago [-]
Not sure what you mean - I can screenshot it freely; that's not the point. The point is that if you then look at the screenshot, you can't discern the text, because it's a single frame now.
This is on MacOS 15.6, Chromium (BrowserOS), captured with the OS' native screenshot utility. Since I was asked about the zoom factor, I now tried simply capturing it at 100% and it was still perfectly readable...
I guess the trick doesn't work on this browser.
dylan604 15 hours ago [-]
I zoomed out to 90% and could make out that something was there, but it wasn't easy to read. Zooming out further went back to just being noise. I also tried zooming in, with no success. What zoom level did you use? And I guess we have to ask the standard browser/version/OS/etc.? My FF v142 on macOS never took a screen grab like yours.
chii 12 hours ago [-]
This is really interesting - because it means the "randomness" is different between the text and the background, and when you zoom out enough, the eye can distinguish it?
vunderba 12 hours ago [-]
Hmmm, I think it's probably just an aliasing / canvas drawing issue. When I bring in a screenshot heavily zoomed out to 33%, the pixels comprising the "HELLO" shape have a significantly higher luminance than the rest of the background.
dwg 16 hours ago [-]
Zoom out before taking the screenshot and the text is no longer obfuscated. I tried it and confirmed it works. In fact, the text is perhaps even more readable than the original.
anigbrowl 16 hours ago [-]
It depends how fast or slow your GPU is. I tried it and saw the effect you described, but within a second or two it started moving and was obscured again. Obviously you could automate the problem away.
dylan604 15 hours ago [-]
Mine freezes the animation on zoom change. Not sure you could automate against that
anigbrowl 13 hours ago [-]
What I meant was that even if it only freezes for a second, you could automate the screenshots to be captured during that time instead of trying to beat the clock manually
domatic1 9 hours ago [-]
but screen recording works :)
tamimio 10 hours ago [-]
On your phone, just record the screen, then scrub through the player: every still frame blends in with its surroundings, but as soon as it moves, the text shows up.
kps 15 hours ago [-]
The text reappears when I screenshot it twice.
davidgerard 12 hours ago [-]
Screenshotted fine in Xfce.
UltraSane 15 hours ago [-]
Seems trivial to diff multiple screenshots to identify what parts move. Or just use a compression algorithm to do the same.
dazzlevolta 13 hours ago [-]
Would 2 screenshots be enough, I wonder?
boothby 13 hours ago [-]
Yeah, the letters are big enough that an XOR shows the text quite clearly.
1oooqooq 8 hours ago [-]
"you cannot screenshot this already illegible mess of white noise"
hbbio 13 hours ago [-]
Coinbase was hacked for $400M when literally someone from outsourced support services was taking screenshots on their phone!
The culprit had more than 10k photos of all security details for thousands of wealthy customers.
gloosx 12 hours ago [-]
If it's even true that someone from outsourced support has access to such sensitive security details, then using this dumpster is almost like throwing your money out of the window.