OracleGPT: Thought Experiment on an AI-Powered Executive

(senteguard.com)

58 points | by djwide 1 day ago

8 comments

  • alexpotato 21 hours ago
    You sometimes hear people say "I mean, we can't just give an AI a bunch of money/important decisions and expect it to do ok," but this is already happening, and has been for years.

    Examples:

    - Algorithmic trading: I was once embedded on an options trading desk. The head of the desk mentioned that he didn't really know what the PnL was during trading hours, because the swings were so big that only the computer algos knew if the decisions were correct.

    - Autopilot: planes can now land themselves so precisely that the front landing gear wheels "thud" as they roll over the runway center markers.

    and this has been true for at least 10 years.

    In other words, if the above is possible then we are not far off from some kind of "expert system" that runs a business unit (which may be all robots or a mix of robots and people).

    A great example of this is here: https://marshallbrain.com/manna1

    EDIT: fixed some typos/left out words

    • mjr00 21 hours ago
      > A great example of this is here: https://marshallbrain.com/manna1

      This is a piece of science fiction, and it has its own (inaccurate, IMO) view of how minimum-wage McDonald's employees would react to a robot manager. Extrapolating this to real life is naive at best.

      • pixl97 20 hours ago
        >Extrapolating this to real life is naive at best.

        Why? It's as much a view of our past unthinking adherence to technology as it is a view of the future.

        "Computer says no" is a saying for a reason.

        • nirav72 20 hours ago
          >"Computer says no" is a saying for a reason.

          Current LLMs rarely say no, unless they're specifically configured to block out certain types of requests.

    • altmanaltman 4 hours ago
      "Expert system" running a company is never going to happen unless shareholders are okay with no accountability from the company. You'll always need someone to blame in case things go wrong. You could have an executive using such an "expert system" for literally all their decisions, but it has to be a human being signing off on those decisions. There is no way to prosecute code and unless these expert systems can become sentinent or appear in court, best of luck trying to let it run a company in the real sense of actually making those decisions with full autonomy and responsbility.
    • pavel_lishin 20 hours ago
      But none of those things are AI in the same sense that we use the term now, to refer to LLMs.
      • alexpotato 20 hours ago
        But those things were considered on the same level as current LLMs, in the sense of "well, a computer might do part of my job but not ALL of it".

        No, algorithmic trading didn't replace everything a trader did, but it most certainly replaced large parts of the workload and made it much faster and horizontally scalable.

        • happymellon 6 hours ago
          The problem here is that you are cherry-picking examples of successful technology.

          The inverse would be to list off Theranos, Google Stadia, and other failed tech, and point out that people claimed there were massive steps that subsequently didn't materialise. In fact, a lot of the time it was mostly fabricated by people with something to gain from ripping off VCs.

          Look at how bad it is with Microsoft and Windows, despite their going "all in on AI".

          Ultimately no one really knows how it will pan out, or whether we will end up with an Enron or an Apple. Or it could be a successful tech that is ultimately mishandled by corporations and fails, or a limited tech that nevertheless captures the imagination through pop culture and takes over.

        • exsomet 18 hours ago
          The two key differences to me are infrastructure and specificity of purpose.

          Autoland requires a set of expensive, complex, and highly fine-tuned equipment to be installed on every runway that enables it (which, as a proportion of the world's runways, is statistically not a majority).

          And as to specificity, this system does exactly one thing: land a specific model of plane on a specific runway equipped with instrumentation configured a specific way.

          The point being: it isn’t a magic wand. Any serious conversation about AI in these types of life-or-death situations has to recognize that, without the corresponding investment in infrastructure and specificity of purpose, things like this blog post are essentially just science fiction. The fact that previous generations of technology considered autoland and algorithmic trading to be magic doesn’t really change anything about that.

    • Guvante 12 hours ago
      You gave examples of feedback loops.

      We know very well how to train computers to handle those effectively.

      Anything without quick feedback is much more difficult to do this way.
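
      To make that concrete, here's a minimal sketch (illustrative only; the payoff numbers are made up) of why the tight-feedback case is the easy one. With an immediate reward after every decision, even a trivial epsilon-greedy learner converges:

        import random

        # Two-armed bandit with immediate feedback: every pull returns a
        # reward right away, so simple incremental averaging converges fast.
        true_payoffs = [0.3, 0.7]   # hidden arm qualities (assumed values)
        estimates, counts = [0.0, 0.0], [0, 0]

        for step in range(10000):
            # epsilon-greedy: usually exploit the best estimate, sometimes explore
            if random.random() < 0.1:
                arm = random.randrange(2)
            else:
                arm = estimates.index(max(estimates))
            reward = 1.0 if random.random() < true_payoffs[arm] else 0.0
            counts[arm] += 1
            estimates[arm] += (reward - estimates[arm]) / counts[arm]

        print(estimates)  # converges toward [0.3, 0.7]

      Stretch the delay between decision and reward (say, one PnL number per thousand decisions), and this update rule has nothing to attribute the outcome to; that credit-assignment gap is the difference between the trading-desk examples and running a business unit.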

    • djwide 20 hours ago
      I'm saying there's something structurally different between autonomous systems generally and an LLM corpus, which has all of the information in one place and is, at least in theory, extractable by one user.
    • kekqqq 18 hours ago
      I must say the book is unrealistic, but it makes a good sci-fi story. Thanks, I just read the whole thing in 80 minutes.
  • alanbernstein 23 hours ago
    Considering things like Palantir, and the DOGE effort run through Musk, it seems inconceivable that this is not already the case.

    I think I'm more curious about the possibility of using a special government LLM to implement direct democracy in a way that was previously impossible: collecting the preferences of 100M citizens, and synthesizing them into policy suggestions in a coherent way. I'm not necessarily optimistic about the idea, but it's a nice dream.

    • djwide 23 hours ago
      Thanks for the comment. Interesting to think about but I am also skeptical of who will be doing the "collecting" and "synthesizing". Both tasks are potentially loaded with political bias. Perhaps it's better than our current system though.
    • ativzzz 21 hours ago
      > special government LLM to implement direct democracy

      I like your optimism, but I think realistically a special government LLM to implement authoritarianism is much more likely.

      In the end, someone has to enforce the things an LLM spits out. Who does that? The people in charge. If you read any history, the most likely scenario will be the people in charge guiding the LLM to secure more power & wealth.

      Now maybe it'll work for a while, depending on how good the safeguards are. Every empire only works for a while. It's a fun experiment.

    • Zagitta 21 hours ago
      Centralising it is definitely the wrong way to go about it.

      It'd be much better to train an agent per citizen, that's in their control, and have it participate in a direct democracy setup.

    • stewh_eng 23 hours ago
      Indirectly, this is kind of what I was trying to get at in this weekend project: https://github.com/stewhsource/GovernmentGPT. It uses the British Commons debate history as a starting point to capture divergent views across political affiliation, region, and role. Changes over time would be super interesting, but I never had time to dig into that. Tl;dr: it worked surprisingly well, and I know a few students have picked it up to continue this theme in their research projects.
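
      For anyone curious about the general shape of that pipeline, here is a hypothetical sketch (not the actual GovernmentGPT code; the field names and prompt wording are made up): group debate contributions by affiliation, then ask whatever model you have on hand to contrast the groups.

        from collections import defaultdict

        # Toy corpus standing in for parsed Commons debate records (structure assumed)
        speeches = [
            {"party": "Labour", "region": "North West", "text": "..."},
            {"party": "Conservative", "region": "South East", "text": "..."},
        ]

        by_party = defaultdict(list)
        for s in speeches:
            by_party[s["party"]].append(s["text"])

        def contrast_prompt(groups):
            # Build one prompt asking the model to surface divergent views
            sections = "\n\n".join(
                "## " + party + "\n" + "\n".join(texts)
                for party, texts in groups.items()
            )
            return "Summarise where these Commons speeches diverge, by party:\n\n" + sections

        print(contrast_prompt(by_party))  # feed to an LLM of your choice
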
      • bahmboo 21 hours ago
        That looks very interesting. It could use a demo or examples for those of us with short attention spans. Would be cool to feed it into TTS or video generation like Sora.
    • zozbot234 20 hours ago
      Real-world LLMs cannot even write a proper legal brief without making stuff up, providing fake references and just spouting all sorts of ludicrous nonsense. Expecting them to set policy or even to provide effective suggestions to that effect is a fool's errand.
      • pixl97 20 hours ago
        >Real-world politicians cannot even write a proper legal brief without making stuff up, providing fake references and just spouting all sorts of ludicrous nonsense. Expecting them to set policy or even to provide effective suggestions to that effect is a fool's errand.

        This has been a more realistic experience of the average American for the past few years.

  • mellosouls 23 hours ago
    This is an interesting and thoughtful article, I think, but it is worth evaluating in the context of the service ("cognitive security") its author is trying to sell.

    That's not to undermine the substance of the discussion of political/constitutional risk under the inference-hoarding of authority, but I think it would be useful to bear in mind the author's commercial framing (or, more charitably, the motivation for the service, if this philosophical consideration preceded it).

    A couple of arguments against the idea of singular control: it would require technical experts to produce and manage it, and it would be distributed internationally, given that any country advanced enough would have its own version. But it would of course pose tricky questions for elected representatives in democratic countries to answer.

    • djwide 23 hours ago
      There's not a direct tie to what I'm trying to sell, admittedly. I just thought it was a worthwhile topic of discussion; it doesn't need to be politically divisive, and I might as well post it on my company site.

      I don't think there are easy answers to the questions I am posing and any engineering solution would fall short. Thanks for reading.

  • zozbot234 22 hours ago
    The really nice thing about this proposal is that at least now we can all stop anthropomorphizing Larry Ellison, and give Oracle the properly robot-identifying CEO it deserves.
    • Terr_ 21 hours ago
      For those who haven't seen the reference: https://www.youtube.com/watch?v=-zRN7XLCRhc&t=38m27s
      • pocketarc 1 hour ago
        I think I’ve read the reference a hundred times throughout my time on HN, but I had _never_ actually seen it. Thank you for the link!
    • kmeisthax 21 hours ago
      But then we'd have to call it LawnmowerGPT
    • jeffrallen 21 hours ago
      I came here for this, am not disappoint. :)

      Best meme in hacker space, thanks /u/Cantrill.

  • johnohara 21 hours ago
    > The President sits at the top of the classification hierarchy.

    Constitutionally, and in theory as Commander-in-Chief, perhaps. But in practice, it does not seem so. Worse yet, it's been reported that the current President doesn't even bother to read the daily briefing because he doesn't trust it.

    • handedness 21 hours ago
      It's not an issue of theory-versus-practice.

      You're conflating the classification system, established by EO and therefore by definition controlled by the Executive, with the classified products of intel agencies.

      A particular POTUS's use (or lack thereof) of classified information has no bearing on the nature of the classification system.

    • djwide 20 hours ago
      I point that out a little when I refer to agencies being discouraged from sharing information. The CIA may be worried about losing HUMINT data to the NSA, for example. You may also be referring to them compartmentalizing information away from the president, which, you're right, happens to some extent now but shouldn't 'in theory'. Maybe it's a don't-ask-don't-tell arrangement. I think Cheney blew the cover of an intel asset, though.
      • handedness 20 hours ago
        > compartmentalizing the information away from the president as well which you are right happens to some extent now

        This is nothing new; it has been happening since at least the 1940s, to multiple administrations from both parties: Roosevelt, Truman, Kennedy, Nixon, Reagan... and that's just some of the instances that were publicly documented.

    • SoftTalker 20 hours ago
      And the last president couldn't comprehend it.

      <shrug>

  • blibble 23 hours ago
    I think we're already there, aren't we?

    No human came up with those tariffs on penguin island.

  • MengerSponge 23 hours ago
    A COMPUTER CAN NEVER BE HELD ACCOUNTABLE THEREFORE A COMPUTER MUST NEVER MAKE A MANAGEMENT DECISION.
    • unyttigfjelltol 21 hours ago
      Computers are more accountable. You just pull the plug, wipe the system.

      Executives, in contrast, require option strike resets and golden parachutes, with no accountability.

      Neither will tell you they erred or experience contrition, so at a moral level there may well be some equivalency. :D

      • sifar 18 hours ago
        >> Computers are more accountable. You just pull the plug, wipe the system.

        I think you are anthropomorphizing here. How does a computer feel when unplugged? How would a computer take responsibility for its actions?

    • notpushkin 22 hours ago
      Let’s assume we live in a hypothetical sane society, where company owners and/or directors are responsible for actions taken through the corporate entity. When they decide to delegate management to an LLM, wouldn’t they be held accountable for whatever decisions it makes?
    • toomuchtodo 23 hours ago
      While I have great respect for this piece of IBM literature, I will also mention that most humans are not held accountable for management decisions, so I suppose this idea was for a more just world that does not exist.
      • skirge 22 hours ago
        human CAN and computer CAN NEVER
        • toomuchtodo 22 hours ago
          My point is that accountability is perhaps irrelevant. You can turn off a computer; you can turn off a human. Is that accountability? Accountability only exists if there are consequences, and those consequences matter. What does it mean for them to "matter"?

          If accountability is taking ownership of mistakes and correcting for improved future outcomes, then certainly, I trust the computer more than the human. We are in no danger of running out of humans incurring harm within suboptimal systems that continue to allow it.

      • lenerdenator 22 hours ago
        I'd say that the fix, then, is to create a more just world where leaders are held accountable, rather than to hand things off to something that, by its very nature, cannot be held accountable.
    • deelayman 22 hours ago
      I wonder if that quote still applies to systems that are hardwired to learn from decision outcomes and new information.
      • advisedwang 20 hours ago
        LLMs do not learn as they go in the same way people do. People's brains are plastic and immediately adapt to new information, but for LLMs:

        1. Past decisions and outcomes get into the context window, but that doesn't actually update any model weights.

        2. Your interaction may eventually get into the training data for a future LLM. But this is an incredibly diluted form of learning.
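
        A minimal sketch of what point 1 means in practice (names are illustrative, not any particular vendor's API): the only "memory" between calls is whatever text you re-send in the prompt, while the weights behind the model stay frozen.

          # Toy stand-in for a frozen model: no gradient step ever happens here.
          def generate(prompt):
              return "...model output..."

          history = []

          def chat(user_msg):
              history.append("user: " + user_msg)
              reply = generate("\n".join(history))  # weights untouched
              history.append("assistant: " + reply)
              return reply

          chat("Our Q3 decision lost money.")  # the "lesson" exists only as text...
          chat("What should we do in Q4?")     # ...and only while it fits the window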

      • svieira 22 hours ago
        What (or who) would have been responsible for the Holodomor if it had been caused by an automated system instead of deliberate human action?
    • nilamo 22 hours ago
      Management is already never held accountable, so replacing them is a net benefit.
  • djwide 1 day ago
    [flagged]
    • djwide 23 hours ago
      Can anyone tell me why the comment got downvoted? The article is past the character count, so I have to link.
      • adanto6840 22 hours ago
        Because the comment is just a copy/paste of the content at the URL at the top of the comment, which doesn’t add anything to the discussion, because it’s not discussion at all. It’s just a wall of text; it’s not clear to me why you’re posting it as a comment (vs. linking to it), and simply regurgitating copy/pasted external information isn’t helpful or interesting.

        Next time, write a sentence or two of context about what you’re going to link to — who wrote it and when, why it’s interesting, and how/why it’s relevant to the topic at hand.

        There’s almost never a need to copy/paste wholesale external content into an HN comment. Especially true when said content is literally linkable, and actually linked, from your comment!

      • jlund-molfese 23 hours ago
        The reason is that most people interpret the comment as noise. It’s very long, so it takes up a lot of space and makes it harder to find other people’s comments.

        On HN it’s best to just link to the article; there’s no need to also copy and paste anything in comments except for very short quotes.