And what repercussions does a company like that have to fear? None.
Legislation worldwide needs to catch up with tech badly.
Is this news anymore?
Here's a title you can reuse freely for the next decade or so.
(Startup/public/private-equity-owned) company's <IoT devices> collect data you don't want them collecting, use it for profit to your detriment, and the company didn't bother securing any of it because they don't care.
Each time it happens, it needs to be news to name and shame the companies. Unfortunately, once you've bought the product, it's game over for privacy. So this info needs to be explicitly available for each product/company so that when future buyers are researching, they might be able to stumble upon these articles.
Product reviewers need to explicitly state that the cameras/mics/whatevs of devices have been used for nefarious purposes other than what is advertised on the box.
But we should not just sweep everything under the rug because a couple of nerds "know about it"; there's a heck of a lot more people that do not.
Name and shame doesn't work. What does work is prison terms for CEOs.
*Prison terms for the board of directors.
In the event of incidents that result in mass deaths (oil spills, etc), tried as if mass murderers for the board.
CEOs are too often just scapegoats for the evils of a board.
I mean I've seen it go both ways, but sure, the board should know what the CEO is doing and vice versa.
> What does work is prison terms for ceos.
Could work. But, unfortunately those CEOs make the law.
> Each time it happens, it needs to be news to name and shame the companies.
Was this ever a problem for Microsoft, Google or Apple ? /s
Oh, the interesting part is "our AI could not interpret images of common objects at unusual angles".
Now that's fascinating - why not? Is computer vision just boring pattern recognition, with no underlying "concepts"? If so, 90% of the AI hype is false.
There must be several phds in that at least :-)
What? Stable Diffusion doesn't have an underlying understanding, gathered from a vast sea of training data, that humans typically have two arms, two hands, and five fingers per hand? That's a bold statement.
I think the issue is “understanding”
IIRC it's a debate as to the difference between two views: 99% of the time the model predicts the next pixel will be fleshy and the pixel next to it will be background, thus making something that looks fingery (and so when presented with an odd angle, that 99% drops crazily), versus the view that somehow an executive function has evolved that has a concept of "finger", with movement, musculature, etc.
It’s the “somehow evolved” part that is where I have my concerns.
Predictive ability based on billions of images, sounds good. Executive function - how does that work? But at some point we are playing "what is consciousness" games.
Would love to hear more rigorous thought than mine - any links gratefully received :-)
I actually agree with you. I was being a bit sarcastic. If I understand correctly, there isn't a fundamental difference between text output and pixel-data output in this context. If so, then it suddenly sounds like much more of a stretch (intuitively) to claim that Stable Diffusion somehow understands the real world (as people claim is the case with language models).
> and five fingers per hand
In my experience it's more like three to six. But your argument's still valid: there is a concept.
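The "it just predicts local pixel statistics" view in this thread can be sketched with a deliberately dumb toy model: memorize per-pixel frequencies of an upright "hand" shape, then score the same shape at an unusual angle. The 5x5 "hand", the noise level, and the scoring rule are all invented for illustration; nothing here resembles a real diffusion model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "hand": a 5x5 binary image whose columns 0, 2, 4 are "fingers".
def make_hand():
    img = np.zeros((5, 5), dtype=int)
    img[:, [0, 2, 4]] = 1
    return img

# "Training": estimate per-pixel marginal frequencies from noisy upright
# hands. This is purely local statistics; nothing encodes a "finger" concept.
train = np.stack(
    [make_hand() ^ (rng.random((5, 5)) < 0.05).astype(int) for _ in range(200)]
)
p = train.mean(axis=0)  # estimated P(pixel == 1) at each position

def score(img):
    # Average per-pixel agreement with the learned statistics.
    return float(np.mean(np.where(img == 1, p, 1 - p)))

upright = make_hand()
rotated = np.rot90(upright)  # same object, unusual angle

print(f"upright: {score(upright):.2f}, rotated: {score(rotated):.2f}")
```

The rotated image contains exactly the same shape, but the positional statistics no longer match, so its score collapses. A model with an actual concept of "finger" would be far more tolerant of the rotation; that gap is roughly what the "unusual angles" observation is poking at.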
Tech progress at its finest. I stick to my 90s-made fridge, similarly vintage washing machine, non-smart vacuum, and non-smart microwave. All solved and sturdy appliances. Cheers.
Just to clarify: The photos and audio collection isn't related to the mentioned security flaws. These are two separate issues.
> Ecovacs robot vacuums, which have been found to suffer from critical cybersecurity flaws...

> An Ecovacs spokesperson confirmed the company uses the data collected as part of its product improvement program to train its AI models.
https://dontvacuum.me/
Privacy-respecting vacuum robots? Why not Valetudo?
https://valetudo.cloud/
That’s what Valetudo uses.
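On the theme of keeping these devices local: independent of Valetudo, one common belt-and-braces step is to block the vacuum from reaching the internet at the router. A minimal sketch for a Linux-based gateway, assuming (hypothetically) the vacuum sits at 192.168.1.50 and your LAN is 192.168.0.0/16; substitute your own addresses:

```shell
# Drop any forwarded traffic from the vacuum that is not destined
# for the local network, so it can't phone home while LAN control
# (e.g. Valetudo's local web interface) keeps working.
iptables -I FORWARD -s 192.168.1.50 ! -d 192.168.0.0/16 -j DROP
```

This only helps if the device has no other uplink (some IoT devices fall back to peer radios or hotspots), but for a Wi-Fi-only vacuum it cuts off cloud uploads entirely.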
aside: W1XM is a cool callsign. I'm sure it elicits the satellite radio company, but to me it's https://en.wikipedia.org/wiki/XM_(file_format)
also it's Whiskey One X-ray Mike, which rolls off the tongue
At least I know I'm right to avoid anything with a camera on it. You're not crazy if they're after you. I also try to avoid Chinese products, but we all know that's not completely possible anymore.