I don't think he was making that comparison. I think he was more referring to the mentality of "you must be fun at parties" whenever someone speaks up with some concerns about an idea.
---
I've seen people use an electric whisk to unreel a string to which the key was attached. Seems less risky than the parachute thing.
---
It must have been dramatic, but from a practical point of view, wrapping it in a few layers of paper towel (so that it doesn't kill anyone) would be faster and easier to target.
---
Is it even possible for a parachute-retarded key to directly hurt someone? I'd be more worried about it surprising someone driving a car or riding a bike and causing an accident.
---
If I am curious about something and want to learn, I don't want to have to sift through jokes and sarcastic comments. I find joy in learning, and people can still be informative and use humor.
---
I can't wrap my head around how that hat drops in a straight line. Between the propeller and any wind, how is that hat not all over the place?
---
I can assure you that you have no idea what you're talking about, starting with the fact that you obviously didn't watch the video.

It isn't aiming anything. It isn't adjusting for anything. It's dropping from a stationary point. The ML isn't used for anything other than a simple "is the thing I was trained to look for within this area?" check. It's basically an ML version of something one could do pretty easily in OpenCV.

There's NOTHING about this useful for aerial bombing, which involves dozens of problems much harder than "this is the spot you should aim for." There are probably dozens of smartphone apps for helping marksmen calculate adjustments that are about a hundred times more complicated, and more useful for (potentially) hurting people, than this. And then there's this Stuff Made Here project where the guy makes a robotic bow that can track objects and hit them no matter where you're aiming: https://www.youtube.com/watch?v=1MkrNVic7pw&pp=ygUTYm93IHRoY...

I can't stand people who act like it's reasonable for the government to monitor and harass people for stuff like this. The second our government is harassing him or the SMH guy, I'm moving to Canada.
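For what it's worth, here's roughly what that "is the target in this region?" check looks like in plain OpenCV — a minimal sketch only, where a bundled frontal-face Haar cascade stands in for whatever detector actually suits the camera angle, and `frame.jpg` is a placeholder for one grabbed camera frame:

```python
import cv2

# Bundled Haar cascade; ships with the opencv-python wheel.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

# "frame.jpg" is a placeholder for a single frame from the camera.
img = cv2.imread("frame.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# detectMultiScale answers exactly one question per frame:
# "is the thing I'm looking for within this area, and where?"
boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in boxes:
    # The center of each box is the "drop here" point; everything
    # else in a project like this is plumbing.
    print("detection center:", x + w // 2, y + h // 2)
```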
---
You've replied to somebody talking about "if somebody developed (something not in this blog post)" with a long, angry rant, as if the blog post had claimed to develop that thing.
---
Yes, no more machine code. Everything was to be written in BASIC. ...how we laughed at that outlandish idea. It was so obvious performance would be... well... pretty much what we have today.
---
Be it "AI" or not, these mostly fall under "AI" legislation, at least in the new EU AI Act. Which is IMHO a better way to legislate than tying laws to specific algorithms du jour.
---
I believe the whole project, and the talk of stores in particular, is humour. At least that's how I read it. I appreciate not everyone has the same sense of humour, so that may have passed you by.
---
I dream of a world where I merely open my mouth and wish it, and the gum just flies down into it, already unwrapped.

You're working toward this world and I commend you.
---
I would hope that we have invented error-free software development by then, though. Otherwise, a small error leading to the wrong coordinates could really ruin your day (or head)... ;)
---
I work on Roboflow. Seeing all the creative ways people use computer vision is motivating for us. Let me know (email in bio) if there are things you'd like to be better.
---
> Unless perhaps the camera was attached outside their window

I remember it was on the balcony, securely attached. The building simply cited their policy, not any laws or safety issues.
---
What an unexpectedly cool post. I clicked the link thinking it would be "typical dumb", but it ended up being atypically dumb in the greatest way! Fascinating. The author overcame many challenges and wrote about them in a style as if he'd solved the hardest parts with only a little fiddling. Maybe he's already seasoned in the ML and robotics domains? So much fun to read.

Regarding the video object detection: why does inference need to be done via the Roboflow SaaS? Is it because the Pi is too underpowered to run a fully on-device solution such as Frigate [0] or DOODS [1]? And presumably a Coral TPU wasn't considered because the author mostly used stuff he happened to have lying around.

Can anyone comment with contrasting experience with Roboflow? Does it perform better than Frigate and DOODS? Asking for a friend. I totally don't have announcement speakers throughout my house that I want to say "Mom approaching the property", "Package delivered", "Dog spotted on a walk", "Dog owner spotted not picking up after their beast", and so on. That last one will be tricky to pull off. Ah well :)

[0] https://github.com/blakeblackshear/frigate/pkgs/container/fr...
---
FWIW you can use Roboflow models on-device as well. detect.roboflow.com is just a hosted version of our inference server (if you run the Docker image somewhere, you can swap out that URL for localhost or wherever your self-hosted one is running — minimal sketch below, after the links). Behind the scenes it's an HTTP interface for our inference[1] Python package, which you can run natively if your app is in Python as well.

Pi inference is pretty slow (probably ~1 fps without an accelerator). Usually folks use CUDA acceleration with a Jetson for these types of projects if they want to run faster locally. Other benefits: there are over 100k pre-trained models others have already published to Roboflow Universe[2] that you can start from, support for many of the latest SOTA models (with an extensive library[3] of custom training notebooks), tight integration with the dataset/annotation tools at the core of Roboflow for creating custom models, and good support for common downstream tasks via supervision[4].

[1] https://github.com/roboflow/inference

[2] https://universe.roboflow.com

[3] https://github.com/roboflow/notebooks

[4] https://github.com/roboflow/supervision
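Here's the sketch mentioned above — a minimal example of pointing the same HTTP interface at a self-hosted server instead of the hosted endpoint. The model ID, API key, image path, and port are placeholders/assumptions (9001 is the port the inference server Docker image commonly listens on), so check the inference docs for your setup:

```python
import base64
import requests

# Placeholders: your own model ID ("project-name/version") and Roboflow API key.
MODEL_ID = "your-project/1"
API_KEY = "YOUR_API_KEY"

# Self-hosted inference server; swap BASE_URL back to
# https://detect.roboflow.com to use the hosted version instead.
BASE_URL = "http://localhost:9001"

# Send one frame as a base64-encoded body, per the hosted API's shape.
with open("frame.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

resp = requests.post(
    f"{BASE_URL}/{MODEL_ID}",
    params={"api_key": API_KEY},
    data=image_b64,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
resp.raise_for_status()

# Each prediction carries a class, a confidence, and a box center (x, y).
for pred in resp.json().get("predictions", []):
    print(pred["class"], pred["confidence"], pred["x"], pred["y"])
```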
---
This would be interesting; feel free to email me if you get stuck. If you had a camera at eye level, you could try training it to recognize the players' jersey numbers.
---
Should give this to the coach too; Texas players get heat exhaustion.

Trace and Hudl use shirt-number and person tracking. I bet they could add skin color and gait analysis to do this as well.
---
It can't be the best. It's only one of many positive consequences. Not even a main justification, but only a point of defense for those so irrationally against the concept.
---
> Picture a world where you can walk around New York City and everything you need is falling out of windows onto you.

A funny way of criticizing something. Great commentary.
---
Once superintelligence takes over all jobs, as it is claimed will happen (and there is an AIBI: AI Basic Income), I hope we are free to do more such projects :)
---
> My dream is for all the city windows to be constantly dropping things on us all the time. You will need a Raspberry Pi...

A Raspberry Pi would hurt quite a bit, depending on the floor!

---

It seems I'm in a minority in thinking this is not that great... Wind can blow the hat (or whatever the generalized idea drops) into traffic, or onto a baby, or anywhere else likely to upset people. Also, if the recipient can't or doesn't pick the thing up, it's littering. From a technical perspective, finding heads in a video is not that impressive nowadays... So I don't get all the excitement.