[ home / bans / all ] [ qa / jp / sum ] [ maho ] [ f / ec ] [ b / poll ] [ tv / bann ] [ toggle-new / tab ]

/qa/ - Questions and Answers

Questions and Answers about QA


File:lust_provoking_ai_image.jpg (151.21 KB,832x1216)

 No.134238

Once AI becomes good enough to truly replace lawyers, would it be ethical to deploy them for the task?

 No.134239

In that case, wouldn't we just automate the entire court process? Your robo cop submits paperwork to JudgeAI to issue an arrest warrant. Then, as soon as your robo lawyer asks a few questions, it submits them to the robo judge. Your verdict is relayed to you after the GPU cluster crunches the case for 5 minutes and a jury of your simulated peers comes to a group decision.

 No.134240

>>134239
Jury duty would be uploading your thought process to the jury cloud.

 No.134241

>>134239
Yeah, you would. But that's where I'm wondering if it's ethical or not, to automate the judicial process. It feels like having no human representation is wrong.

 No.134242

I don't think they will ever be good enough, because AI is not actually intelligent; it would be relying on an algorithm trained on previous court cases instead. That might work in some cases, but in many it would not, since cases throw up their own unique aspects and you can't really just rely on precedent for all of them. Otherwise you might end up with somebody sentenced far more harshly than they should be because the AI could not understand the circumstances of the crime, or far more leniently for the same reason.

 No.134243

real dumboob hours

 No.134244

>>134241
Whether it's ethical is debatable. From a legal standpoint, yes, AI would be good. Laws are a confusing mess of codes and rules, and lawyers are already soulless automatons; what difference would a literal machine make for reading and arguing on the basis of dense legal code?
Judgment is a different story. It's easy enough to judge things in black and white: either they broke the law or they didn't. But judges are human and can sympathize. A machine would not question the law even if it were nonsensical and outdated; a judge can throw a case out because it's fucking stupid and a waste of time to prosecute.
A judge can also weigh the defendant's intent in deciding whether wrong was done. For example: speeding in a school zone. You were going through at 50 mph and you got caught. A judge may be lenient and let you off with a warning if the reason you sped was a valid concern, like rushing to your sickly grandmother's house after she missed several calls.
AI can't do this, and it shouldn't. Shit's already loose enough that it doesn't see anything wrong with ordering 1000 cheesy rolls at a drive-through kiosk; it lacks common sense.

 No.134249

>>134244
Having an AI judge is a terrible idea; it will repeat whatever biases are in the training data. If, say, people with French names got convicted more often (due to real-life reasons or bias), an AI judge trained on those outcomes would gladly judge someone guilty for having a French name. These models are statistical inference machines: they don't care about causation, only correlation.
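The correlation-over-causation point is easy to demonstrate with a toy frequency "model" (all the numbers here are made up for illustration, not from any real dataset or system): fit it on biased historical verdicts and it convicts on the spurious feature alone.

```python
# Toy sketch: a naive classifier trained on biased outcomes learns the
# spurious feature (the name), not the cause. Hypothetical data only.
from collections import Counter

# (name_is_french, guilty_verdict): French-named defendants were
# convicted more often in this invented historical record.
training = [(True, True)] * 80 + [(True, False)] * 20 \
         + [(False, True)] * 40 + [(False, False)] * 60

def fit(data):
    """Estimate P(guilty | feature) by simple frequency counting."""
    counts = {True: Counter(), False: Counter()}
    for feature, label in data:
        counts[feature][label] += 1
    return {f: c[True] / sum(c.values()) for f, c in counts.items()}

model = fit(training)

def predict(name_is_french):
    # The "AI judge" convicts whenever the historical conviction
    # rate for that feature exceeds 0.5 -- pure correlation.
    return model[name_is_french] > 0.5

print(predict(True))   # True: the name alone produces a conviction
print(predict(False))  # False
```

Nothing in the fitting step can distinguish "French names cause guilt" from "French-named people were over-convicted"; the model reproduces whichever pattern the past happened to contain.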



