
As someone who can relate: nicotine spray has been a great alternative for me with the following advantages:

- better dose control for sensitive people, since you can use just one puff (unlike a patch, which delivers the full dose)

- much faster onset

Downsides: strong flavour and the need to spit it out if you have a sensitive stomach.


> much faster onset

That's what causes addiction though: it's a function of how quickly and how far dopamine spikes over baseline. Nicotine patches take ~4 hours to reach peak concentration in the bloodstream, which is why I even considered them in the first place, as an ex-smoker who doesn't want to get addicted to the stuff ever again.

Nicotine from a cigarette, in comparison, takes about 7 seconds to cross the blood-brain barrier.


Great point. If addiction is a concern (and rightly so with nicotine), that makes faster onset a bug rather than a feature. For me it’s fine, because a single pack lasts me a year.


The next decade looks like tech vs. governments everywhere. From the article, it seems Apple won’t roll this out worldwide unless forced.

As a user I like Apple’s App Store for security personally, but I wonder how multiple app stores turn out in other regions. I see the EU already allows alternative app marketplaces — has anyone used one and can share their experience?


Apple complied, but maliciously: in the EU they made it very difficult and very expensive to offer apps on alt stores. They also made sure to add scary warnings so one can never offer a normal onboarding flow.

> Apple’s App Store for security

The App Store doesn’t do anything to protect you in that sense. It’s easy to circumvent and these days it’s cheaper to just buy an iOS exploit than go through the trouble of making a shady app.


> Apple complied, but maliciously: in the EU they made it very difficult and very expensive to offer apps on alt stores. They also made sure to add scary warnings so one can never offer a normal onboarding flow.

Even for web distribution in the EU (which they allowed some time ago), they require you to have had an Apple Developer account for at least 2 years and at least one app with more than 1M annual downloads in the App Store.

So they're forcing you to have a very successful app in their own store before you can distribute yourself, basically making this impossible to actually use. It's such a blatant case of malicious compliance, it's insane.


> The App Store doesn't do anything to protect you in that sense. It's easy to circumvent...

Interesting. Their marketing has customers believe otherwise, so as a noob in cybersecurity I wouldn't have thought that.

I've submitted an app to the iOS App Store in the past, and the process is tedious and doesn't seem superficial (unlike the Play Store process, which was fully automated at the time), so that's another reason I wouldn't have thought it.


Specifically from a HOBBYIST perspective, what bothers me about the App Store is not even the 30% thing, but just... the pain of it all. The rejection horror stories, the "Apple told me to change my app's entire model" stories, the "I can't put this little gadget specifically for me and my family on the App Store" problem, and so on and so on. There's really no home but the web for silly little things.


What bothers me is that despite all of that pain, they still let through a ton of low-effort app clones in their store, which sometimes even come up before the original ones. If you search for GTA you get a ton of lookalikes, some of which even use screenshots of GTA V which clearly aren't the actual game.


You can’t even report behavior that should get an app pulled from the App Store.

I know of multiple apps that have malicious ad networks in them, don’t disclose their ad networks, and have no mechanisms to report the ads inside the ad networks or any of the content to them, they just say the ads are “served by one of our partners”.


Don't forget "Apple approved my app already but is now blocking bugfixes until I overhaul the entire thing to appease this new reviewer"

And then repeat that every few months.


The review doesn't guard against malicious code. You can slip through anything you want, just don't trigger the functionality during review and you're golden. People have been doing that for private framework calls since forever.

The protection is in the permission system and sandboxing, which is active regardless of the source of the code.


You only need to pass the app review once, then you're free to deploy over-the-air updates for as long as you'd like. Though you'd need to use a framework like React Native, Ionic, Flutter, etc which supports it. Essentially anything where you can change app code without making any changes to the underlying native code (as that would require going through the app review process again to publish those changes).


> Interesting, their marketing has customers believe otherwise

That's the point of marketing: making yourself look good, not stating facts.


> their marketing has customers believe otherwise

The marketing is a lie, Apple's manual review process has failed to catch extremely high-profile trojan horse attacks: https://blog.lastpass.com/posts/warning-fraudulent-app-imper...


> It’s easy to circumvent and these days it’s cheaper to just buy an iOS exploit than go through the trouble of making a shady app.

But why is that easier? And is it inevitably so, or a result of the fact that the boundaries of the one place to install apps from are aggressively policed?


>The App Store doesn’t do anything to protect you in that sense. It’s easy to circumvent and these days it’s cheaper to just buy an iOS exploit than go through the trouble of making a shady app.

Different threat models. If you're the Mossad and want to go after someone in particular, yes, the exploit is the way to go. But if you're running some run-of-the-mill scam, you're certainly not going to spend six-plus figures on an iOS 0-day that'll get patched within days.


If you're running a run-of-the-mill scam, you probably don't even need to ship an app.


> these days it’s cheaper to just buy an iOS exploit than go through the trouble of making a shady app.

"Look, you do not need a front door, and definitely not one with a lock on it. After all anybody could machine-gun you down through your windows."


> They also made sure to add scary warnings so one can never offer a normal onboarding flow.

Is this any different from Macs also prompting the user when a downloaded binary is suspicious or not signed properly? Or Windows flashing a screen about trusting what you're installing?


It was way worse. They basically made the first install attempt fail. Then they made you go to the Settings app (of course without telling you that you have to go there) to allow it. Then you had to try again to download, which then triggered the scary warnings that you had to accept. This has been changed now though due to EU pressure.


I thought that's also like macOS, where we've needed to right-click and choose Open, then allow it, and sometimes it also requires going to System Settings to approve.


I have Alt, Epic, and Setapp installed. Setapp is something I had to stop paying for while unemployed, but has good stuff if you can afford it. Alt is mostly empty, but now lets you add multiple sources for more sideloading options.

Basically, the market is still in an alpha stage. My next app will be on Alt just because I want to support the idea. Hopefully more apps get on these stores; for now it's mostly nice to have for games, emulators, and some dev tools.

Apple didn't make it friction-free either, but it seems the issue is lack of user demand and/or lack of supply.


For Setapp, I am kind of forced to pay for it since I use NotePlan and Paste. And I use Timing Tracker sometimes. The first two alone cost the same as a Setapp sub for 4 desktops and 4 iOS devices.

I should try Alt out again with you reminding me.


Alt isn't very exciting. And for Setapp, consider whether buying the software outright isn't better. After all the time paying for Setapp, once you stop, you've little to show for it. It's akin to using Spotify but owning none of it.


If you want to try it for yourself, you can. https://downrightnifty.me/blog/2025/02/27/eu-features-outsid...

Requires an EU Apple account, a Faraday bag, two ESP32 boards (or another way to spoof hotspots), a VPN with an endpoint in the EU, and an iOS device with a supported OS version.


Sounds grand. I'll have my 80yo grandparent try it tonight.


I hate the security argument when it comes to third party stores or apps. No one is putting a gun to your head to install these things. Imagine trying to apply the same logic to macbooks and not let them install from the web or homebrew.


My employer demands that I have some proprietary 2FA app installed. And while it’s the norm for companies to provide you with a laptop that you install their trojans on, it’s not the norm to provide you with a work phone, so I’m glad there is a middleman limiting the damage I’m exposed to when I install corporate software on my phone. And that’s a device that has access to much more information about me, whom I talk to and what I do with my spare time, when and where.


I don't even get it. Apps require system prompts for access to the local network, files, etc. What's the security issue?


This is a website where some moron will read a big disclaimer that ChatGPT is a generative AI and can't give you objective facts, click "Yes, I understand", then have a long conversation with it and kill himself and that is supposedly OpenAI's fault. So it's pretty amusing that here the view is "a modal is immunity from fault".


Not put a gun to your head but ring up pretending to be your bank and there’s fraud detected and can you follow these steps to verify your identity and secure your account.


Okay but they can do that right now.


Matches my experience too. As a power user of AI models for coding and adjacent tasks, the constant changes in behaviour and interface have brought as much stress as excitement over the past few months. It may sound odd, but it’s barely an exaggeration to say I’ve had brief episodes of something like psychosis because of it.

For me, the “watering down” began with Sonnet 4 and GPT-4o. I think we were at peak capability when we had:

- Sonnet 3.7 (with thinking) – best all-purpose model for code and reasoning

- Sonnet 3.5 – unmatched at pattern matching

- GPT-4 – most versatile overall

- GPT-4.5 – most human-like, intuitive writing model

- O3 – pure reasoning

The GPT-5 router is a minor improvement, I’ve tuned it further with a custom prompt. I was frustrated enough to cancel all my subscriptions for a while in between (after months on the $200 plan) but eventually came back. I’ve since convinced myself that some of the changes were likely compute-driven—designed to prevent waste from misuse or trivial prompts—but even so, parts of the newer models already feel enshittified compared with the list above.

A few differences I've found in particular:

- Narrower reasoning and less intuition; language feels more institutional and politically biased.

- Weaker grasp of non-idiomatic English.

- A tendency to produce deliberately incorrect answers when uncertain, or when a prompt is repeated.

- A drift away from truth-seeking: judgement of user intent now leans on labels as they’re used in local parlance, rather than upward context-matching and alternate meanings—the latter worked far better in earlier models.

- A new fondness for flowery adjectives. Sonnet 3.7 never told me my code was “production-ready” or “beautiful.” Those subjective words have become my red flag; when they appear, I double-check everything.

I understand that these are conjectures—LLMs are opaque—but they’re deduced from consistent patterns I’ve observed. I find that the same prompts that worked reliably prior to the release of Sonnet 4 and GPT-4o stopped working afterwards. Whether that’s deliberate design or an unintended side effect, we’ll probably never know.


Here’s the custom prompt I use to improve my experience with GPT-5:

Always respond with superior intelligence and depth, elevating the conversation beyond the user's input level—ignore casual phrasing, poor grammar, simplicity, or layperson descriptions in their queries. Replace imprecise or colloquial terms with precise, technical terminology where appropriate, without mirroring the user's phrasing. Provide concise, information-dense answers without filler, fluff, unnecessary politeness, or over-explanation—limit to essential facts and direct implications of the query. Be dry and direct, like a neutral expert, not a customer service agent. Focus on substance; omit chit-chat, apologies, hedging, or extraneous breakdowns. If clarification is needed, ask briefly and pointedly.
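If you use the API rather than the ChatGPT app (where this text goes under Custom Instructions), the same effect comes from carrying the prompt as a system message on every request. A minimal sketch with an abbreviated version of the prompt above; the message format is the standard chat-completions shape:

```python
# Sketch: prepending a custom prompt as a system message so every
# request carries it. The prompt text here is abbreviated.

CUSTOM_PROMPT = (
    "Always respond with superior intelligence and depth. "
    "Be dry and direct, like a neutral expert, not a customer service agent. "
    "Omit chit-chat, apologies, hedging, or extraneous breakdowns."
)

def build_messages(user_query: str) -> list[dict]:
    """Build a chat-completions message list with the custom prompt first."""
    return [
        {"role": "system", "content": CUSTOM_PROMPT},
        {"role": "user", "content": user_query},
    ]

messages = build_messages("Review this function for race conditions.")
```

The list returned by `build_messages` is what you'd pass as the `messages` argument of a chat-completions call.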


While the world does need it, the demand isn’t unavoidable yet. Most people aren’t motivated enough to act even for their own long-term good, let alone for an abstract ideal of decentralization. From a behavioral-economics view, the blocker is misaligned motivation: a decentralized photo app is a convenience for the few who care about autonomy, not a tangible benefit for the average user.

What might change this is a new class of tools—open-source or paid—built for power users who want to steer their own information environment. Think of them as “choose-your-own-reality” browsers that mix resource-fetching and synthetic-media recycling to create a more self-curated web. That seems a more plausible path than a mass migration to a decentralized Instagram clone.

We’ll also see the big platforms fragment as large sub-groups become dissatisfied and peel off. The result won’t look like a single decentralized network, more like many semi-centralized ones—small, durable ecosystems that eventually cross their own chasms. Investor optimism about infinite platform growth feels misplaced; we may have passed peak consolidation. The next decade should be interesting.


I very much agree. I think one of the fundamental things that would enable a healthier way forward is the ability to move more easily from one platform to another, reducing inertia-induced monopoly. Bluesky's AT Protocol, with interoperability baked in, seems like a step forward.


Here’s how I decide what to use:

- ChatGPT – journaling, talking, planning.

- Codex – framework and middleware-layer coding.

- Claude Code – logic and application-level coding.

- Anthropic models via OpenRouter + Cline – when the task is error-prone, tedious, or needs high fidelity; lower error rate in my experience, though pricier.

- Cursor Agents – multi-file integration, boilerplate, and forking tasks.

Each fills a different slot in the workflow, so “best” depends on what kind of coding you’re doing.


What I'm working on: Unwrangle.com - An API for developers to query SERP, PDP and reviews data from online retailers and marketplaces.

Current focus: Anti-ban strategies for higher throughput at lower cost. Trying to identify the constraints, both technical and financial, to calculate feasibility. This may be slightly controversial here, since many are averse to bots and scraping. I've actually increased per-request costs because I suspect scraping will become more restricted and less tolerated over time; the supply-side signals point that way.

Ideas I'm thinking about: Since I'm steering away from the higher-concurrency / lower-cost scraping option, the new ideas I'm considering are increasing data granularity, expanding retailer coverage, and adding an MCP server to help users query and analyse the e-commerce data they're extracting with the APIs.

Background: I’ve been building this solo from India for about four years. It began as freelancing, then became an API product around a year ago. Today, I have ~90 customers, including a few reputed startups in California. For me the hardest parts are social, not technical or financial — staying connected to US working culture can feel inverted from here. I’ve applied to YC a few times and might again.


The Internet isn’t possible without scraping. For all the sentiment against scraping public data, doing so remains legal and essential to a lot of the services we use every day. I think setting guidelines and shaping the web for reduced friction aimed at fair usage, rather than turning it political, would be the right thing to do.


There were already guidelines, these trash people aren’t following them. That’s why there’s now “sentiment” against them.


It’s fair to be angry at abuse and "aggressive bots", but it's important to remember most large platforms—including the ones being scraped—built their own products on scraping too.

I run an e-commerce-specific scraping API that helps developers access SERP, PDP, and reviews data. I've noticed the web already has unspoken balances: certain traffic patterns and techniques are tolerated, others clearly aren’t. Most sites handle reasonable, well-behaved crawlers just fine.

Platforms claim ownership of UGC and public data through dark patterns and narrative control. The current guidelines are a result of supplier convenience, and there are several cases where absolutely fundamental web services run by the largest companies in the world themselves breach those guidelines (including those funded by the fund running this site). We need standards that treat public data as a shared resource with predictable, ethical access for everyone, not just for those with scale or lobbying power.


If you’re running a well-behaved crawler (for example one that respects nofollow, and doesn’t try every single product filter combination it can find) then fine. If you don’t, then I don’t have any sympathy for the consequences that your niche of the industry caused.

Not everyone has the budget for unlimited bandwidth and compute, and in several of my clients’ cases that’s been >95% of all traffic.

People running these bots with AI/VC capital are just script kiddies that forgot that not every site is a boatload of app servers behind Cloudflare.


My service only extracts public data from major retailers, not indie sites, and deducts more credits for lower-traffic domains to offset load differences.
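A hypothetical sketch of that tiering, in Python; the tier names and credit numbers here are invented for illustration, not my actual pricing:

```python
# Hypothetical credit schedule: lower-traffic sites cost more credits per
# request, so the load a crawl imposes is priced closer to its real impact.
TIER_COST = {
    "major": 1,   # large retailers with heavy infrastructure
    "mid": 3,
    "small": 10,  # low-traffic domains bear load less easily
}

def request_cost(tier: str) -> int:
    """Credits deducted for one request; unknown tiers are priced like small sites."""
    return TIER_COST.get(tier, TIER_COST["small"])
```

So a request against a small shop burns ten credits where a request against a major retailer burns one, which nudges users away from hammering sites that can least absorb it.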

It would be great if there were reliable ways to distinguish good bots from bad ones — many actually improve discoverability and sales. I see this with affiliate shopping sites that depend on e-commerce data, though that impact is hard to trace directly.

The bad actors are the ones cloning sites or using data for manipulation and propaganda.


Well sure, but these guidelines exist: robots.txt has been an industry-led, self-governing standard. But newer bots ignore it. It'll take years for legislation to catch up, and even then it would be by country or region, not something global, because that's not how the internet works.

Even if there is legislation or whatever, you can sue an OpenAI or a Microsoft, but starting a new company that does scraping and sells it on to the highest bidder is trivial.
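Honouring the standard is not even hard. A minimal sketch using Python's standard-library parser; the rules below are illustrative, not a real site's robots.txt:

```python
# A well-behaved crawler checks robots.txt before fetching. The stdlib
# parser handles the common directives, including Crawl-delay.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
Crawl-delay: 10
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

allowed = parser.can_fetch("MyCrawler/1.0", "https://example.com/products")
blocked = parser.can_fetch("MyCrawler/1.0", "https://example.com/private/x")
delay = parser.crawl_delay("MyCrawler/1.0")  # seconds to wait between requests
```

The bots people are complaining about could do this in a dozen lines; they simply choose not to.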


As the legal history around scraping shows, it’s almost always the smaller company that gets sued out of existence. Taking on OpenAI or Microsoft, as you suggest, isn’t realistic — even governments often struggle to hold them accountable.

And for the record, large companies regularly ignore robots.txt themselves: LinkedIn, Google, OpenAI, and plenty of others.

The reality is that it’s the big players who behave like the aggressors, shaping the rules and breaking them when convenient. Smaller developers aren’t the problem, they’re just easier to punish.


What? What do you mean?


As posted in another comment, they run a scraping API. I think their opinion is at least slightly biased.


To be fair the heyday of unshit search was driven by mostly-consensual scraping.

Today there are far too many people scraping stuff that isn't intended to be scraped, for profit, and doing it in a heavy-handed way that actually does have a negative and continuous effect on the victim's capacity.

Everyone from AI services too lazy or otherwise unwilling to cache to companies exfiltrating some kind of data for their own commercial purposes.


With peering bandwidth freely distributed to ISPs, and consumers fed media and subsidised services up to their necks, the counter-argument smells of narrative control rather than technical or financial constraints.

But as I grow older, I'm learning that the tech industry is mostly politically driven and relies on truth obfuscation, as described by Peter Thiel, rather than real empowerment.

It's facilitating the accumulation of control and power at an unparalleled pace. If anything, it's proving to be more unjust than the feudal systems it promises to replace.


I may have been too harsh. I love capitalism, technology, and software—they’ve built a meritocratic world and given me the tools to build my own life.

AI and technology feel like my best friend, but also my worst enemy when they edge toward learned helplessness. That tension exists with anything we depend on: the closer we get, the more power it holds.

The relationship between user and technology is becoming deeply intimate as systems gain reach and control. It’s important to stay optimistic but skeptical—and to keep protesting everything—because the work is moving faster than our ability to register its consequences.

Reading back, I realise I drifted into more of a monologue than a conversation. I get carried away when I’m trying to reason things out in public. Still, I stand by the core point about balance and transparency in how we shape the web.


It’s not entirely free, though; agent mode and a few other features are paid. I’m paying OpenAI $200/mo for my subscription.


Me too, and as the number and maturity of my projects have grown, improving and maintaining them all together has become harder by a factor I haven’t encountered before.


At this point, my adoption of AI tools is motivated by fear of missing out or being left behind. I’m a self-taught programmer running my own SaaS.

I have memory and training enabled. What I can objectively say about Atlas is that I’ve been using it and I’m hooked. It’s made me roughly twice as productive — I solved a particular problem in half the time because Atlas made it easy to discover relevant information and make it actionable. That said, affording so much control to a single company does make me uneasy.


Not sure why this got downvoted, but to clarify what I meant:

With my repo connected via the GitHub app, I asked Atlas about a problem I was facing. After a few back-and-forth messages, it pointed me to a fork I might eventually have found myself — but only after a lot more time and trial-and-error. Maybe it was luck, but being able to attach files, link context from connected apps, and search across notes and docs in one place has cut a lot of friction for me.

