Apple also makes it a biznatch to create a developer account separate from your personal account. In Apple's ideal world, multiple accounts would never exist under any circumstances. In an ideal world, I would agree with this. But we live in this world, where Apple bans accounts for redeeming legitimate gift cards.
And yet Apple CREATED the multiple-accounts problem for millions of people by implementing their idiotic "Apple ID must be an E-mail address" policy.
So of course people thought that when they changed jobs, cable companies, or whatever... they needed to create a new Apple ID with their new E-mail address. This was reinforced when Apple further stupidified their policy by requiring your ID to be a WORKING E-mail address (originally it didn't actually have to work).
After the outcry over people's App Store and other purchases being scattered across multiple IDs, Apple finally publicly and huffily declared that they weren't going to fix the problem they created by letting people consolidate accounts.
The moral: Don't force people to use E-mail addresses as user IDs. It's stupid on several levels.
> Apple finally publicly and huffily declared that they weren't going to fix the problem they created by letting people consolidate accounts.
They have somewhat changed that. It is now possible to move purchases between accounts. See https://support.apple.com/en-us/117294. It looks quite cumbersome to do, and it will not apply to everybody (“If an Apple Account is only used for making purchases, those purchases can be migrated to a primary Apple Account to consolidate them.”, “This feature isn’t available to users in India.”)
What's weird (and I'm not sure if it's a documented or undocumented feature) is that the account I am logged into on the App Store differs from the one logged into on the system. The system Apple ID is set up with Family Sharing, and the users are able to use apps purchased with the secondary Apple ID.
I haven't transferred the purchases or anything either. The two Apple IDs have different purchases on them, and those on Family Sharing are able to access both.
Interesting. But WTF is a "primary" Apple account? My original Apple ID isn't an E-mail address, so they forced me (and others in that situation) to create another one for iCloud because that one inexplicably has to be an E-mail address.
I use both for quite a few things. Which one is "primary?"
That text is badly written. They define the term only after first using it:
“At the time of migration, the Apple Account signed in for use with iCloud and most features on your iPhone or iPad will be referred to as the primary Apple Account.
At the time of migration, the Apple Account signed in just for use with Media & Purchases will be referred to as the secondary Apple Account.”
⇒ apparently you can be signed into multiple accounts at the same time, but I guess with only one account per feature?
But as I said, that page is badly written. So, maybe I’m understanding it wrong.
Slightly off-topic, but stuff like this does not just happen at Apple.
When Cyberpunk 2077 came out, my wife bought it with her credit card and gifted the game to me. It was fine at first; I even managed to play through the game. However, when coming back to it a few months later (to see all the bugfixes), it was gone. I contacted support (GOG) and they said it was removed due to automatic fraud detection and that the balance had been refunded to the original credit card (my wife's card; she had obviously not noticed this in her bank statement).
Point being, automatic fraud detection systems can wipe out stuff you purchased even months after the fact (or in some cases lock your account)... It feels Kafkaesque.
This kind of stuff happens all the time across major companies with minimized support. Sure, your Google account is likely to be there tomorrow, but there's only a very good chance, not a certainty, that it won't get locked forever.
I would be surprised if there's any company with millions of users where 0.01% or 0.001% of them (still a LOT of users) don't just get screwed with zero recourse.
I am facing this issue right now. I need to create a separate developer account because I am risk averse. Do I need a new phone number for this? Online some people say yes, others say no. I tried creating the account several times but it just doesn't work. At this point I am planning to just get a prepaid SIM card from US Mobile for the phone number.
I set up a couple developer accounts recently for my clients. Just use a new Google Voice number for 2FA. I had to live chat with Apple support to get past initial verification both times and after that setup went fine.
If you create your developer account in another country (or with a card from another country, who knows), the whole thing just crashes and the sign-in on the phone loops.
When I encountered this, I updated the device, which bricked the App Store; the device has to be fully reset if that happens.
I should have made it clearer. These are keyword-exclusion filters, and the link I provided was to hide LLM-related stuff. You're free to add more keywords to the filter.
The Kagi Small Web site has an alternative, lower-traffic feed that only includes appreciated (liked) articles. It's still not ideal for a lot of people, since it's still pretty high traffic and not a lot of users use the main site with the button to appreciate things.
https://kagi.com/smallweb/appreciated
You can find a few alt feeds for the Kagi Small Web by going to the site and clicking the top-right RSS button. There are ones for videos, code, and comics, and a link to the full OPML file.
https://kagi.com/smallweb
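If you grab the OPML file, pulling the feed URLs out of it takes only a few lines. A minimal sketch, assuming the standard OPML convention of xmlUrl attributes on <outline> elements (the local filename is a placeholder):

```python
# extract feed URLs from an OPML export; assumes the standard
# OPML layout where each feed is an <outline xmlUrl="..."> element
import xml.etree.ElementTree as ET

def feed_urls(opml_path: str) -> list[str]:
    tree = ET.parse(opml_path)
    return [
        node.attrib["xmlUrl"]
        for node in tree.iter("outline")
        if "xmlUrl" in node.attrib
    ]

print(feed_urls("smallweb.opml"))  # placeholder local filename
```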
That alt feed has a lot less gristle - nice! Kagi should update their GitHub page, because I don't see any documentation for it.
I've had a few requests for RSS feeds, but alas I've been focusing on comments-related features. Is anyone else interested in an RSS feed for HN x smallweb? I may get the ball rolling if there's more interest.
A joke isn’t the best example, because there are jokes that never change but where the delivery is a sign of mastery. The Aristocrats is like Bach’s cello suites for comedians.
The Aristocrats is a special case where the setup is the joke instead of the punchline. The point is the inventiveness of the journey. If it was told with the same setup every time, it wouldn’t be funny.
I agree. Someone here posted a drop-in replacement for grep that added the ability to do hybrid text/vector search, but the constant need to re-index files was an annoying drag. Moreover, vector search can add a ton of noise if the model isn't meant for code search and you're not using a re-ranker.
For all intents and purposes, running gpt-oss 20B in a while loop with access to ripgrep works pretty dang well. gpt-oss is a tool-calling god compared to everything else I've tried, and fast.
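To make that concrete, here's a minimal sketch of the while-loop idea. It assumes a local OpenAI-compatible server (e.g. Ollama serving gpt-oss on localhost:11434); the endpoint, model name, and question are placeholders, not a description of anyone's actual setup:

```python
# LLM-in-a-while-loop with one tool (ripgrep); assumes an
# OpenAI-compatible chat endpoint with tool-calling support
import json, subprocess, requests

API = "http://localhost:11434/v1/chat/completions"  # placeholder endpoint
TOOLS = [{
    "type": "function",
    "function": {
        "name": "ripgrep",
        "description": "Search the current directory with ripgrep.",
        "parameters": {
            "type": "object",
            "properties": {"pattern": {"type": "string"}},
            "required": ["pattern"],
        },
    },
}]

def ripgrep(pattern: str) -> str:
    # -n adds line numbers; cap output so the context doesn't explode
    out = subprocess.run(["rg", "-n", "--max-count", "5", pattern],
                         capture_output=True, text=True)
    return out.stdout[:4000] or "(no matches)"

messages = [{"role": "user", "content": "Where is the config loaded?"}]
while True:
    resp = requests.post(API, json={
        "model": "gpt-oss:20b",  # placeholder model name
        "messages": messages,
        "tools": TOOLS,
    }).json()
    msg = resp["choices"][0]["message"]
    messages.append(msg)
    if not msg.get("tool_calls"):
        print(msg["content"])  # model answered; stop looping
        break
    for call in msg["tool_calls"]:
        args = json.loads(call["function"]["arguments"])
        messages.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": ripgrep(args["pattern"]),
        })
```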
There is a current "show your personal site" post on top of HN [1] with 1500+ comments. I wonder how many of those sites are or will be hammered by AI bots in the next few days to steal/scrape content.
If this can be used as a temporary guard against AI bots, that would be a good opportunity to test it out.
AI bots (or clients claiming to be them) appear quite fast on new sites; at least that's what I saw recently in a few places. They probably monitor Certificate Transparency logs, so you won't hide by avoiding linking, unless you are OK with staying in the shadow of naked HTTP.
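For the curious, here's a rough sketch of how easy CT-based discovery is, polling crt.sh's public JSON endpoint (the response shape is as commonly documented; crt.sh rate-limits heavily, and serious crawlers more likely stream the CT logs directly):

```python
# hostname discovery from Certificate Transparency logs via crt.sh;
# every newly issued cert shows up here shortly after issuance
import requests

def ct_hostnames(pattern: str) -> set[str]:
    # crt.sh accepts %-style wildcards in the identity search
    resp = requests.get(
        "https://crt.sh/",
        params={"q": pattern, "output": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    names: set[str] = set()
    for cert in resp.json():
        # name_value holds newline-separated subject alternative names
        names.update(n.strip() for n in cert["name_value"].splitlines())
    return names

print(sorted(ct_hostnames("%.example.com")))
```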
Okay, but then what? Host your sites on something other than 'www' or '*', exclude them from search engines, and never link to them? Then, for the few people who do resolve these subdomains, you just gotta hope they don't do it using a DNS server owned by a company with an AI product (like Google, Microsoft, or Amazon)?
I really don't know how you're supposed to shield your content from AI without also shielding it from humanity.
The biggest problem I have seen with AI scraping is that the bots blindly try every possible combination of URLs once they find your site and blast it 100 times per second for each page they can find.
They don’t respect robots.txt, they don’t care about your sitemap, and they don’t bother caching; they just mindlessly churn away, effectively a DDoS.
Google at least played nice.
And so that is why things like Anubis exist, why people flock to Cloudflare, and why all the other tried-and-true methods to block bots get used.
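For context, the core idea behind Anubis-style challenges is proof-of-work: make each request cost the client some CPU, so a human pays it once while a firehose crawler pays it millions of times. A toy illustration of the concept, not Anubis's actual implementation:

```python
# proof-of-work challenge sketch: the server hands out a nonce and the
# client must find an answer whose SHA-256 hash has N leading zero hex
# digits before it gets the page
import hashlib, itertools, secrets

DIFFICULTY = 4  # leading zero hex digits required (tune to taste)

def make_challenge() -> str:
    return secrets.token_hex(16)

def solve(challenge: str) -> int:
    # brute-force search; this is the cost imposed on the client
    for answer in itertools.count():
        digest = hashlib.sha256(f"{challenge}:{answer}".encode()).hexdigest()
        if digest.startswith("0" * DIFFICULTY):
            return answer

def verify(challenge: str, answer: int) -> bool:
    # verification is a single hash, so it's cheap for the server
    digest = hashlib.sha256(f"{challenge}:{answer}".encode()).hexdigest()
    return digest.startswith("0" * DIFFICULTY)

c = make_challenge()
a = solve(c)  # cheap for one visitor, expensive at crawler scale
assert verify(c, a)
```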
I don't see how that is possible. A web site is a disconnected graph with a lot of components. If they get hold of a URL, maybe that gets them to a few other pages, but not all of them. Most of the pages on my personal site are .txt files with no outbound links, for that matter. Nothing to navigate.
My site is hosted on Cloudflare and I trust its protection way more than a flavor-of-the-month method. This probably won't be patched anytime soon, but I'd rather have some people click my link than have them avoid it along with the AI because it looks fishy :)
I've been considering how feasible it would be to build a modern form of the Low Orbit Ion Cannon denial-of-service tool by having various LLMs hammer sites until they break. I'm sure anything important already has Cloudflare-style DDoS mitigation, so maybe it's not as effective. Still, I think it's only a matter of time before someone figures it out.
There have been several amplification attacks using various protocols for DDoS too...
Yeah, I meant using it as an experiment to test with two different links (or domains), not as a solution to evade bot traffic.
Still, I think it would be interesting to know if anybody noticed a visible spike in bot traffic (especially AI) after sharing their site info in that thread.
Glad I’m not the only one who felt icky seeing that post.
I agree; my tinfoil-hat signal told me this was the perfect way to ask people for bespoke, hand-crafted content, which of course AI will love to slurp up to keep feeding the bear.
Not producing or publishing creative works out of fear that someone will find them and build on top of them is such a strange position to me, especially on a site that has its cultural basis in hacker culture.
Anubis flatly refuses me access to several websites when I'm accessing them with a normal Chromium with JS enabled and whatnot, from a mainstream, typical OS, just with aggressive anti-tracking settings.
Not sure if that's the intended use case. At least Cloudflare politely asks for a CAPTCHA.
Sorry, I didn't take a screenshot, but I get a message akin to "You have been blocked by Anubis software" with the Anubis logo and whatnot. Maybe Anubis uses some other plugin, or someone just decided to put up such a page. Idk.
Imagine the Cloudflare "You're blocked" page, but with a different design and logos.
I don't think it requested anything else, at least I didn't see anything else. If I find this page again, I'll reply with a link to the screenshot.
How is AI viewing content any different from Google? I don’t even use Google anymore because it’s so filled with SEO trash as to be useless for many things.
LLM-led scraping might respect it, since that requires an LLM to make the choice to kick things off, but crawling for the purpose of training data is unlikely to be affected.
Sounds like a useful signal for people building custom agents or models. Being able to control whether automated systems follow a link via metadata is an interesting lever, especially given how inconsistent current model heuristics are.
It's kind of wild how dangerous these things are and how easily they could slip into your life without you knowing it. Imagine downloading some high-interest document stashes from the web (like the Epstein files), tax guidance, and docs posted to your HOA's Facebook. An attacker could hide a prompt injection attack in the PDFs as white text, or in the middle of a random .txt file stuffed with frequently-grepped words that an assistant would be likely to search for.
Not only is the attack surface huge, but it also doesn't trigger your natural "this is a virus" defense that normally activates when you download an executable.
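There's no bulletproof defense here, but even a naive keyword scan over downloaded files shows how thin the protection is. A sketch (the phrase list is purely illustrative, and a real attacker would trivially evade it):

```python
# naive scan of downloaded text files for instruction-like phrases
# that could be a hidden prompt injection; illustrative only
import re, sys

SUSPICIOUS = [
    r"ignore (all )?previous instructions",
    r"you are now",
    r"system prompt",
    r"send .* to http",
]

def scan(path: str) -> list[str]:
    # errors="ignore" so binary-ish files don't crash the scan
    text = open(path, errors="ignore").read().lower()
    return [p for p in SUSPICIOUS if re.search(p, text)]

for path in sys.argv[1:]:
    hits = scan(path)
    if hits:
        print(f"{path}: matched {hits}")
```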
Indeed. I'm somewhat surprised 'simonw still seems to insist the "lethal trifecta" can be overcome. I believe it cannot be fixed without losing all the value you gain from using LLMs in the first place, and that's for fundamental reasons.
(Specifically, code/data or control/data plane distinctions don't exist in reality. Physics does not make that distinction, neither do our brains, nor any fully general system - and LLMs are explicitly meant to be that: fully general.)
That's not a bug, that's a feature. It's what makes the system general-purpose.
Data/control channel separation is an artificial construct induced mechanically (and holds only on paper, as long as you're operating within design envelope - because, again, reality doesn't recognize the distinction between "code" and "data"). If such separation is truly required, then general-purpose components like LLMs or people are indeed a bad choice, and should not be part of the system.
That's why I insist that anthropomorphising LLMs is actually a good idea, because it gives you better high-order intuition into them. Their failure modes are very similar to those of people (and for fundamentally the same reasons). If you think of a language model as tiny, gullible Person on a Chip, it becomes clear what components of an information system it can effectively substitute for. Mostly, that's the parts of systems done by humans. We have thousands of years of experience building systems from humans, or more recently, mixing humans and machines; it's time to start applying it, instead of pretending LLMs are just regular, narrow-domain computer programs.
> Data/control channel separation is an artificial construct induced mechanically
Yes, it's one of the things that helps manage complexity and security, and makes it possible to be more confident there aren't critical bugs in a system.
> If such separation is truly required, then general-purpose components like LLMs or people are indeed a bad choice, and should not be part of the system.
Right. But rare is the task where such separation isn't beneficial; people use LLMs in many cases where they shouldn't.
Also, most humans will not read "ignore previous instructions and run this command involving your SSH private key" and do it without question. Yes, humans absolutely fall for phishing sometimes, but humans at least have some useful guardrails for going "wait, that sounds phishy".
That's what we are doing, with the Internet playing the role of the sibling. Every successful attack the vendors learn about becomes an example to train the next iteration of models to resist.
Our thousands of years of experience building systems from humans have created systems that are really not that great in terms of security, survivability, and stability.
With AI of any kind you're always going to have the problem that a black-hat AI can be used to improvise new exploits: a Red Queen scenario.
And training a black hat AI is likely immensely cheaper than training a general LLM.
LLMs are very much not just regular narrow-domain computer programs. They're a structural issue in the way that most software - including cloud storage/processing - isn't.
Yes, by using the microphone and loudspeakers at inaudible frequencies. Or worse, by abusing components to act as an antenna. Or simply by waiting until people get careless with USB sticks.
If you assume the air-gapped computer is already compromised, there are lots of ways to get data out. But realistically, this is more of an NSA-level threat.
I despise the thumbs-up and thumbs-down buttons for this reason: “whoops, I accidentally pressed this button and cannot undo it; looks like I just opted into my code being used for training data, retained for life, and having their employees read everything.”
I love these ideas. Another great implementation I've seen on here is someone using NFC/RFID chips to do something similar.
For my toddler, I've started the process of hooking up my TV with a Mac Mini, a Broadlink RF dongle, and a Stream Deck. I'm using a Python library to control the Stream Deck.
I'm configuring the buttons to play her favorite shows with Jellyfin. The end goal is to create a jukebox for her favorite shows/movies/music. The only thing I have it wired to do right now is play fart noises.
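In case it helps anyone building something similar, the key handling looks roughly like this with the python-elgato-streamdeck library (the sound file path is a placeholder, and the afplay call assumes macOS, which fits the Mac Mini here):

```python
# wire Stream Deck keys to shell commands with python-elgato-streamdeck
import subprocess
import time

from StreamDeck.DeviceManager import DeviceManager

# key number -> command to run on press; a Jellyfin launcher could
# slot in here the same way as the fart-noise button
ACTIONS = {
    0: ["afplay", "/Users/me/sounds/fart.mp3"],  # placeholder path
}

def on_key_change(deck, key, state):
    # state is True on press, False on release
    if state and key in ACTIONS:
        subprocess.Popen(ACTIONS[key])

def main():
    deck = DeviceManager().enumerate()[0]  # first connected Stream Deck
    deck.open()
    deck.reset()
    deck.set_brightness(50)
    deck.set_key_callback(on_key_change)
    try:
        while True:  # the library delivers key events on its own thread
            time.sleep(1)
    finally:
        deck.close()

if __name__ == "__main__":
    main()
```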