
And this actually seems wildly reasonable. It’s pretty scammy to take people’s money (or whatever your business model is) for legal or medical advice just to give them whatever AI shits out. If I wanted ChatGPT’s opinion (as I actually often do) I’d just ask ChatGPT for free or nearly free. Anyone repackaging ChatGPT’s output as fit for a specialized purpose is scamming their customers.


Your comment actually hints at this towards the end, but yeah, this doesn’t just apply to medical and legal topics.


What if ChatGPT is used as a "power tool" for actual legal / medical professionals?

E.g. here's the set of documents uploaded by the customer. Here are my most important highlights (notice that document #68 is an official letter from last year informing the person that form 68d doesn't need to be filled out in their specific situation). Here are my takeaways. Here's my recommendation. Approve or change?
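
Something like this hypothetical sketch is the shape I have in mind (Python with the OpenAI SDK; the model name, prompts, and helper functions are all made up for illustration), where the licensed professional has to approve or rewrite every draft before it goes anywhere:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def draft_review(documents: list[str]) -> str:
        """Ask the model for highlights, takeaways, and a recommendation."""
        joined = "\n\n---\n\n".join(documents)
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": "You are a drafting assistant for a licensed professional. "
                            "List the most important highlights, your takeaways, and a "
                            "recommendation. Flag anything you are unsure about."},
                {"role": "user", "content": joined},
            ],
        )
        return response.choices[0].message.content

    def human_in_the_loop(documents: list[str]) -> str:
        draft = draft_review(documents)
        print(draft)
        verdict = input("Approve, edit, or reject this draft? [a/e/r] ").strip().lower()
        if verdict == "a":
            return draft  # the professional signs off as-is
        if verdict == "e":
            return input("Enter your corrected version:\n")  # the professional rewrites it
        raise RuntimeError("Draft rejected; nothing goes to the client.")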


In the legal context, AI is still prone to errors. And the legal system (in the US) doesn't have much in the way of guardrails.

Here's a case of several judges getting caught using AI to write error-filled judgements[1]. So it's not just lazy lawyers (several news articles over the past 3 years or so), but the courts themselves!

1 - https://www.reuters.com/sustainability/society-equity/two-fe...


From the article:

> The AI research company updated its usage policies on Oct. 29 to clarify that users of ChatGPT can’t use the service for “tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.”

I think your use case would probably be fine (although I am not a licensed professional, hahaha). I think they mostly want a licensed professional in the “driver seat,” or at least there to eat the responsibility.

It is sort of funny that coding agents became a thing, in this context. The responsibility-sink is irrelevant I guess because code writing doesn’t generate any responsibility as a byproduct.


Ok, but it didn't mention doc 74, which is actually the most important to the matter at hand, so now your patient/client is dead or in jail.

It is only a power tool if it works reliably and predictably. If you have to do all the work anyway, then the tool is adding to your burden, not saving time and effort.


IMO the professionals themselves are free to do so, in the same way they can choose to skim rather than read, or to ask their 13-year-old nephew to read and summarize a document for them in exchange for a Minecraft gift card: In the end the professional is staking their reputation and applicable license(s) on those third parties. If they all work perfectly nobody ever needs to know -- and if they get it wrong because of that shortcut, the professional deserves to be nailed to the wall for it.


I would expect that to work exactly how I code with it.

I would examine the inferred output, decide if it is fit for purpose, and if so then commit it. If not, I would either refine my prompt or make the fixes to the inferred code myself.

Would this hypothetical legal professional not verify that the inferred output (which is really a prediction) is correct?


As an attorney, I don't see what the value of that would be if I have to double check its work. How else could I verify that the output is correct?


The key is to not give it any agency over the work product but rather have it act as an editor or advisor that can offer suggestions, but everything that goes into the document is typed by human hands.

Giving it a document and asking it about edge cases or things that may not be covered in the document. Asking it for various ways that one could argue against a given pleading and then considering ways that those could be headed off before they could even be raised.

In my own case (writing short fiction), having it act as an editor and identify grammatical mistakes, contradictory statements, ambiguous sentences, and tone mismatch for a given character has been very helpful... but I don't have it write the short fiction for me.

---

For software, where it may be used to generate some material (write a short function that does...), the key is short. Something that I can verify and reason about without too much effort.
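
To give a made-up example of the scale I mean, something of this size can be checked in a single read, asserts and all:

    import re

    def slugify(title: str) -> str:
        """Lowercase the title, drop punctuation, and join the words with hyphens."""
        words = re.findall(r"[a-z0-9]+", title.lower())
        return "-".join(words)

    # Small enough that a handful of asserts covers the behavior I care about.
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  LLMs & Legal Advice ") == "llms-legal-advice"
    assert slugify("") == ""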

However, changes on the scale of hundreds of lines are exhausting to review no matter whether an LLM or a junior dev wrote them. I would expect the same to be true of several paragraphs or pages of legalese, which would need additional levels of reading, reasoning, and verifying.

If it's too much to reason about and verify, it's asking too much.

I'd no more trust an LLM to find citations to cases than I'd trust it to program against a lesser-known framework (where they've been notorious for hallucinating functions that don't exist).


>The key is to not give it any agency over the work product but rather have it act as an editor or advisor that can offer suggestions, but everything that goes into the document is typed by human hands.

>Giving it a document and asking it about edge cases or things that may not be covered in the document.

As an attorney, how am I supposed to trust that it gave a proper output on the edge cases without reading the document myself?

>Asking it for various ways that one could argue against a given pleading and then considering ways that those could be headed off before they could even be raised.

Do people think attorneys don't know how to do their day-to-day jobs? We generally do not have issues coming up with how to argue against a pleading. Maybe if you're some sort of small-time generalist working on an issue you hadn't handled before, but that's not most attorneys. And even then, I'd be worried: you're basically not capable of having the expertise needed to verify the model's output for correctness anyway. This is why attorneys work in networks. I'd just find a colleague or a network of attorneys specializing in that area and find out from them what is needed, rather than trusting that an LLM knows all that because it was digested from the entire public Internet.

I've said it here before too, I think people talking about using AI as an attorney don't really understand what attorneys do all day.


The value is, at least in my field, that checking for correctness is often a less tiring task than writing the code. With proper prompting, the LLM will also cover more corner cases than a human will think to cover. And to be honest, I really like the names of internal identifiers that LLMs come up with. It's a skill that is notoriously lacking in software development.

Additionally, code review with the proper tools can be done relatively quickly, and it's always a good idea to get a second opinion, even that of an LLM. I suppose that the human could write the code and then ask the LLM for a code review, but that is not common practice.
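
As a rough sketch of what I mean by a second opinion (the model name and prompt wording are made up, and I'm assuming the OpenAI Python SDK), you can just hand the model your current diff and ask it to review rather than rewrite:

    import subprocess
    from openai import OpenAI

    client = OpenAI()

    def review_current_diff() -> str:
        """Send the working tree's diff to the model and ask for review comments only."""
        diff = subprocess.run(
            ["git", "diff", "HEAD"], capture_output=True, text=True, check=True
        ).stdout
        if not diff.strip():
            return "Nothing to review."
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder
            messages=[
                {"role": "system",
                 "content": "Review this diff like a careful colleague: point out bugs, "
                            "missed corner cases, and unclear naming. Do not rewrite it."},
                {"role": "user", "content": diff},
            ],
        )
        return response.choices[0].message.content

    print(review_current_diff())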


That's great for people who write code!


As long as their service offering is better than me typing the question into my phone I’m fine with it.


You are failing to recognise all of the hard work which goes into "prompt engineering" to get AI to magically work!


It's the hardest work of all; it's basically voodoo or, put more generously, alchemy.

Thankfully we have progressed, so this time it will probably take less than 1000 years to get to full-blown chemistry :-)


It's more and more of a science every day.


Yes, a more correct comparison would be early medicine: a science, but still filled with leeches and lancets.

Oh, and another thing: we still aren't able to quantify whether AI coding is a net benefit. In my use cases the biggest benefit I get from it is as an enhanced code search, basically. Which has value, but I'm not sure it's a "$1 trillion per year business sector" kind of value.


> Anyone repackaging ChatGPT’s output as fit for a specialized purpose is scamming their customers.

This is genuinely hilarious given what LLMs are.


Wait, so we cannot use the API anymore?


To scam people? No.


Indeed. Similarly, I now like having ChatGPT as the absolute lowest bar of service I’m willing to accept. If you aren’t a better therapist, lawyer, or programmer than it, why would I think about paying you?


Having ChatGPT as your therapist is actually wild.

Its yes-man tendencies etc. will not help you, and it's always helpful to have another human being in the loop during therapy so that you know you are actually being sane.

Don't ever compromise on your mental health just to save money.

If you need to, either join offline anonymous support groups, or join some good forums/Discord servers/subreddits about therapy if you cannot even afford that. They are good enough, and personally, what's therapeutic for me is trying to understand and help the other person who was struggling just as I once was, y'know.

But saying to use ChatGPT as a therapist is just something that I hate a lot, simply because in my opinion it might actually even be harmful, but I am not sure.


I didn’t say to use ChatGPT as a therapist; I said don’t use a human therapist who is worse than it.


The exact kind of customer I would not want to have anyway.



