Why not AI?
I was not active in the forum when it became against the rules to use AI. To me, that is an insane decision that makes no sense at all. We can use inferior quotes, but not the best quote. What is the reasoning behind this prejudice?
"If you can't explain your idea to a six year old, you don't understand it yourself."
- (some dead guy)
Take a certain thread that one lone OP keeps bumping to the front page, who has been told multiple times that it makes no sense yet replies only with insults. Stuff like that is why.
I talk a lot about things I don't understand. Why would anyone come here if not out of a desire to have a better understanding? This is not just about what I have to say, but what others have to say. Do you want me to go away because I am not smart enough to be here? :gasp: That really hurts.
Quoting Outlander
I don't understand how much clearer I can make that sentence.
The site owner doesn't seem to like AI taking the place of genuine, organic human discussion and discourse, despite its many imperfections and tendency to lead to less than productive exchanges.
Frankly I enjoy your presence here, as well as your posts and discussions. But that's nothing to do with the question you've asked.
AI LLMs are not to be used to write posts either in full or in part (unless there is some obvious reason to do so, e.g. an LLM discussion thread where use is explicitly declared). Those suspected of breaking this rule will receive a warning and potentially a ban.
AI LLMs may be used to proofread pre-written posts, but if this results in you being suspected of using them to write posts, that is a risk you run. We recommend that you do not use them at all.
We can use AI to clarify/explore ideas to ourselves, it's just recommended that we don't use them at all.
There is a huge problem of trust with LLMs. If you ask a complicated question you don't know the answer to, you don't know whether the answer is complete bullshit. I asked for a summary of some chapters of Cormac McCarthy's Blood Meridian and it gave me wrong garbage. It should have said upfront that it couldn't do it because it doesn't have access to the text.
LLMs are just more fuel for eroding trust in information in our post-truth era.
I don't trust it when it comes to fiction, but I haven't seen it make a philosophical mistake in a long time.
Why do you think it's insane? It seems eminently reasonable to me. In fact I can't imagine a real philosophy forum that did not incorporate such a rule. It seems that all of them have.
Quoting Outlander
Yep. :up: :up:
Quoting 180 Proof
Would you say that those cognitive abilities have benefited from exposure to the intellectual stimulation and challenge provided by the ideas others offer on forums like this one?
I don't know that I can so easily distinguish the benefits of conversation with participants here from the conversations I have with an A.I., which I then incorporate into my contributions on this site. The concepts it exposes me to are not invented by a machine. The machine culls and parses knowledge and opinion produced by an enormous community of actual human beings. Of course, my conversation with such a community has its limitations. The machine can lie and hallucinate in its parsings, so I need to know to request sources and quotes I can verify.
And since the machine doesn't create its own point of view, I have to direct the conversation at every step, which keeps the challenge to my thinking at a more superficial level than is the case with a direct interchange with people. Still, I find the access it gives me to preliminary background information indispensable to the process of organizing my arguments, just as submitting a draft for peer review does. It doesn't make me lazy, or cause me to doubt my own cognitive abilities, any more than refreshing one's acquaintance with a topic through background reading or conversation does. It sharpens those skills. Which is why I sympathize with on this issue. Most contributors to this site will use A.I. in spite of the rule, since it's easy to cover one's tracks. The rule is useful for reminding everyone that A.I. does lie, and more importantly, is not a substitute for presenting and arguing one's own thesis.
AI offers the best explanations.
You want the best explanation for why AI can't be used here.
Ergo, ask AI why you can't use AI here.
No. My "cognitive abilities" (seem to) benefit mostly from exercising them unaided (as much as possible) here and elsewhere.
That seems pretty serious to me. Lying??
I am old and have mental issues. Like many people my age, I often struggle to think of the word I want to say. I also use a walker. For me, being told I cannot use AI is like being told I cannot use my walker.
I interpreted your quote as saying people who struggle with memory and communication issues are not desired members of the forum. Desired members have excellent communication skills and know enough about the subject to explain it to a child. I can't even explain things to adults. I seriously doubt I meet the high standards you all want to keep.
Quoting Hanover
Okay, here is what I got when I asked AI, "Why do forum owners reject you?"
There are more concerns. Frankly, I don't care about them, because AI improves my ability to spread information. If I were a photographer, wouldn't it make sense to use the best camera? When we get better tools we can do better, and I need the better tool because my brain is not that good.
So, the rule does stand. That being said, it does appear you've responded to me without AI coherently and passionately, which means you will do just fine without sending us bot created messages.
Respect for the rule of law assures compliance.
Now that is an ethical issue isn't it? We keep our liberty by agreeing to obey the laws. That does not mean approving of the law. When we disagree with a law, it is our duty to explain why we are opposed to the law and do our best to get the law changed.
Effectively, Socrates gave his life for freedom of speech and the preservation of democracy. Finding fault with the democracy does not mean he thought something else was better. I think he wanted the people to do democracy better.
Besides I don't own the forum and maybe if I did, I would think it necessary to forbid the use of AI.
I don't know where you got that you need AI to present your case.
Surely my pathetic efforts to behave like an intelligent person should prove my posts are made by a human. I do not see how quoting AI is worse than quoting any other source of information; it is just faster, easier, and more efficient.
There is absolutely no reason for anyone to think I have the authority of an expert. Wikipedia and AI accumulate information that is corrected when it needs to be corrected. So it is useful as a source of information to support our arguments.
And thanks for the confidence in my abilities. However, if you were having my experience, I don't think you would be so confident. I would not be surprised if a year from now I couldn't even log in. If my life has any value, it is to increase understanding. I have a heart condition and I chose to do nothing about it, unless a medicine or a pacemaker will improve the quality of my life. I do not want to lose my mind before I lose my life. This is not about just me; it is about getting old and senile. From my point of view, allowing people to use walkers and AI is a kindness, because such aids extend the time a person can function and participate in meaningful activities.
Oh, indeed, you and I would never make use of AI...
I am making a case for everyone who needs aids to achieve what they want to achieve. If this was not about everyone, I would not have started a thread. When I studied gerontology at the U of O, the experts were asking if old people withdrew from society because that was their choice, or are the old people pushed out. I can answer that question with my personal experience.
It is a combination of both: not getting the job because of being too old, or not being able to do the job for physical reasons. The people I speak with agree that technology is closing us out, but some of us are learning to use technology to our benefit. Senile people don't need to stop driving; they just need to learn how to use their cell phones and the GPS function.
I was never the smartest kid in the room, and if it weren't for Grammarly, my posts would be impossible to understand. I relate to the people who have a hard time keeping up, and I hope attitudes regarding AI change. But at the same time, I suspect AI may be a serious threat. We fought a war to throw off the control of a king, and now some people want to turn everything over to AI. :scream: That is alarming.
Oh, oh I wonder how Descartes would handle this issue with his understanding of animals and humans being machines. What is the meaning of humanity if a machine rules over us?
I was responding to being told to ask AI. Now I am confused. You didn't mean for me to ask AI why I can't use AI?
You said--
Quoting Hanover
What I have done elsewhere is use AI to support what I have said. I want to make it clear that what I say is not a personal opinion but is factual. Occasionally, I have run into information that was very exciting to me, and I wanted to use it to open a discussion. That was against the rules in the other forum as well. Accepting that I was not supposed to do that played into my decision to stop participating in that forum.
I am confused by the ban having exceptions. How is anyone supposed to know the limits? And it just dawned on me, using Grammarly may be against the rules. I am screwed if that is so because I can't spell.
Here's an analogous case for you.
This is an English-language forum (almost but not quite entirely), but more than a few members are not native speakers of English. That means sometimes their grammar is a little off, or their diction is a bit surprising, and so on.
No one would ever suggest that if you are not a native speaker of English then you are not welcome here, and in fact most people are willing to overlook minor deviations from standard English, so long as the post is still intelligible. (Most people would be shocked to read a verbatim transcript of everyday speech. We're very good at ignoring deviation, and need only bring that skill to bear.)
More than that, I think most members here are keen to look past the surface of someone's writing and find the ideas being expressed, so if that surface is a bit rough, it's not really a big deal. The forum has rules about presentation that are intended (a) to keep us from looking like some lame social media site where ppl dont bother to spel n punctuate n stuff, and (b) so that posters make an effort not to place unnecessary interpretive burdens on their audience.
In short, mostly people here care what you think and cheerfully make allowances for less than sterling expression of those thoughts. Anyway, that's what I choose to believe this evening.
Which brings me to the main point about AI (or Wikipedia or SEP or IEP or what have you). The only important thing in anyone's post is their ideas, and that means their ideas. If all I post is information I get from elsewhere on the internet, I'm just a go-between; anyone could look up the same stuff I look up, so there's nothing about my post that's uniquely and irreplaceably me.
What people want from you here is what you think. If it's expressed at too great a length, with unnecessary detail, and much of that in parenthetical asides, as here, most readers are pretty forgiving, if annoyed. And if you use some bit of software to improve the presentation of your ideas a little, that's within bounds, so far as I can tell, because the important thing is that it is your ideas getting expressed.
The old tu quoque fallacy.
To be clear though, you can use AI, just not:
"AI LLMs are not to be used to write posts either in full or in part (unless there is some obvious reason to do so, e.g. an LLM discussion thread where use is explicitly declared). Those suspected of breaking this rule will receive a warning and potentially a ban.
AI LLMs may be used to proofread pre-written posts, but if this results in you being suspected of using them to write posts, that is a risk you run. We recommend that you do not use them at all."
You can interact with AI all you want, you just can't have it write your posts. I use AI all the time. Bounce ideas off it. Use its search engine feature. But whatever I post is in my own words and understanding, and I don't use it as source information, but locate whatever it says on some independent site.
Where it says you can't use it to write posts, it means literally as @Athena was suggesting. You can't just plug in info and have it spit you out a response. You can learn whatever you learn however you learn, and once learned, you can tell us what you learned.
This is no different than having your friend do your homework for you. If he explains the topic to you, you read the book, you understand it, and you do the assignment, you're fine. If he does it for you, then you cheated, and no one likes a cheater.
But you can't just say feel free to have your friend do your homework because it's impossible for your teacher to know, which is kinda what you did say.
It is a damn useful tool but not at all intelligent. If you are just copy and pasting AI text as your own post then it is almost certainly not expressing what you want to say perfectly. If you are using it as a fact checker it will require more effort than needed to just write and express your ideas yourself (as it makes errors without exact instructions).
:up: :up:
Quoting Hanover
:up: :up:
It's perfectly fine to bounce ideas off of it, but it shouldn't substitute for your own thinking. Whatever we believe, a good portion of it has come through effort and careful consideration of difficult topics. To have that be watered down by an algorithm will lead to lazy-to-no thinking.
As the saying (somewhat) goes, LLMs can be a good servant but a terrible master.
If those posts are of better quality than ours here (and they probably would be), isn't human philosophical discussion a bit of a mockery?
Is the teacher who brought about a pupil who rivals or even surpasses his or her own intelligence a failure?
I am not that old, but I have a memory problem. I have a very limited vocabulary, yet I am able to communicate with people. I don't use AI such as ChatGPT. I don't need it, and it does not help me when it comes to creating a new idea!
But aren't AI generated posts constructed out of recorded human expressions? It would be a complex kind of plagiarism.
It is hard to enforce, of course. We're mostly relying upon an honor system except when it's blatant because of that -- some people will be people and break the rules because they can get away with it, but for the most part it's discouraged because the point of the site is to think on your own in some manner.
Quoting 180 Proof
With respect to AI I'm fine with being a luddite. For many reasons.
Yes, people will use it. But if we see it's AI slop (which I'm sure people are aware of), then out. We discourage it because we're all probably luddites in this particular way too.
And:
Quoting Outlander
If I wanted to talk to an AI I could just go do that, but this is a forum for people to talk to one another.
I've slowly come to accept that this is the way the world is, but I don't like it. Perhaps it's because I'm prejudiced against AI.
But we're not there yet.
Right now LLMs give overly verbose and wishy-washy responses to open questions. I've seen several forums embrace AI responses only to later ban them. Because they just fill up threads with meaningless bloat.
They are also programmed not to contradict the prompter too forcefully. So peddlers of the most ludicrous conspiracy theories try to claim they now have a legit cite, merely because the AI was too polite to shut down their nonsense. So you would also need to firefight that stuff too.
:up:
I actually made the same point myself, on a forum oriented towards asking miscellaneous general questions on any topic. Forums like that are going to look pretty obsolete pretty soon.
That is a nice thing to say. I have a different perspective, but I am questioning myself and why I think it is important to argue the point. I will simply say that I love using AI explanations and wish everyone would.
I think starting a thread with an interesting AI response and asking people to say what they think of what the AI said could be a lot of fun. I cannot imagine what the problem would be. I just do not have the experience to know what can go wrong.
Well it's a gun that's right now configured to misfire because using the vanilla AI gives responses that don't fit well in discussion forums.
I guess a forum could endorse AI responses, but with specific rules that the prompt to the AI must include hints like "Please respond tersely, and don't be afraid to correct errors in the initial question".
But I think it's better to just wait for the tech to improve.
Also, I am not sure if I hinted at it well in my previous posts, but I think it just leads to lazy behaviour. It's like when people just drop a link to a two-hour video or something that they claim proves their point. Except, in the case of AI, it's text that bloats the thread itself until you get tired of opening the thread. It's kryptonite to good conversation.
Thanks, that is how I see AI, but I think my brain is becoming dysfunctional and never using AI is not going to make things better. But like using a walker, it could extend my ability to do what I want to do.
Quoting Moliere
That is a good idea, and if I could do that, I would not argue in favor of using AI. Hopefully, none of you will know what I am talking about. This link explains the increased difficulty with learning; I go to the gym several days a week, hoping that will slow the decline. https://www.nia.nih.gov/health/brain-health/how-aging-brain-affects-thinking
I can totally relate to the term "lazy brain." I have experienced my brain being lazy, and if I try harder, it gets worse. Stress will totally crash my thinking system. I have lived with that problem for many years. Sometimes, to cope with difficult moments, I tell people "I live in the now" and everyone laughs. It is hard to know when normal senior moments are no longer normal.
It doesn't have a view - it doesn't care, it doesn't have insight. It's useful sure, but a person or people who know the subject matter will be more enriching.
AI can also entrench beliefs which may otherwise not arise, or not as strongly.
Plus, all the words it is generating are based on words said by the finest minds in human history. So we should be careful here when saying that AI can have better discussions.
It's another tool.
Quoting Athena
Let's say I start a thread with a quote from Plato. My readers will take the quote itself as some inert substance waiting to be molded into the OP's point of view. They will want to see that the OP understands the quote, but more importantly they will want to see HOW the OP understands it, what they want to do with it, and how they will deal with reader critiques and disagreements. It is conceivable that the OP could instruct an A.I. to do all these things without the group knowing it. It is even conceivable that readers will learn from the ensuing discussion and may even find it as interesting as dealing with a real person. But most likely, if the creator of the OP is not tightly guiding the A.I. on the basis of a well thought-out direction of argument, the result will appear superficial and not adequately responsive to participants' concerns in the discussion.
Well, no one ever said you cannot discuss with AI and collect your thoughts and feelings. I think they mostly don't want you to ask a question and then just copy and paste directly from the AI. But do be aware that AI makes mistakes too and could mislead you down a path of AI hallucinations.
In other words, you probably shouldn't use it as an authority, but instead use it as a personal assistant.
Are we saying these AIs, then, are like schoolchildren?
Quoting Athena
Find me any extended discussion on this forum without a point of view being argued, discussed, and disagreed with. I don't think you'll find one. Disagreement and questioning of a philosophical point of view does not in itself mean that proof of correctness is the goal. There are many other criteria on the basis of which to question a set of ideas, such as internal coherence, clarity, aesthetic quality, ethical value, pragmatic usefulness, etc.
Athena, I think you are misunderstanding how AI works. When you ask AI to respond to an argument, it is expressing its own opinion. Not your opinion. The forum wants discussion between humans, not between AI. Using AI to refine your posts, or correct spelling is fine, as it is still your opinion being expressed. But if AI writes the response, you aren't expressing your opinion; you are expressing the AI's opinion. AI is known to be very overconfident, making up information when it cannot find any on a subject. By posting an AI response, you are posting the opinion of an unempathetic, brainless, untrustworthy robot. The forum does not want this.
Quoting Baden
I'd say that what is inevitably going to happen (and is already beginning to happen on TPF), is that folks are going to appeal to LLMs as indisputable authorities. "You say X but my almighty LLM says ~X, therefore you are wrong." This will occur explicitly and also in various implicit ways.
Because this is an appeal to an LLM it doesn't directly contravene the rule. Nevertheless, I would argue that it is still remarkably contrary to the spirit of philosophy. It is that look-up-the-infallible-answer routine, which is quite foreign to philosophy (and is itself based on an extremely dubious epistemology).
I hope TPF will discourage this "look up the infallible LLM answer" approach, especially as it becomes more prevalent. The risk of such an approach is that humans become interpreters for AI, where they get all their ideas from AI but then rewrite the ideas in their own voice. Such a result would be tantamount to the same outcome that the current rule wishes to avoid.
(NB: The very fact that so many do not understand why a philosophy forum is intrinsically incompatible with AI-generated posts demonstrates how crucially important administrators and moderators are.)
Again: I use AI tools multiple times every day. I think they're great.
They just aren't appropriate for discussion forums yet.
If they could give short, succinct answers maybe it would be ok. But right now it's a lot of bloat.
And don't get me started on the current fad of YouTube channels doing whole episodes talking to an AI. They'll have a caption like "ChatGPT accepts proof of God!" but I'll watch maybe 10 minutes of flowery, evasive bilge before I give up and watch something else.
In fairness to OpenAI, it's not designed for YouTube debates. And it's not designed for discussion forums either.
Deep. :100:
As far as armchair science goes, this is top notch. :razz: :grin: :strong:
--
Also, to OP:
You and me are in the same boat as far as being overwhelmed with some of the stuff that gets posted here and the discussions brought about as a result. So don't even sweat that for a second.
I'd say a good 80% of topics here are over my head (at least my comfortable, casual level of confidence to have a debate in, if nothing else). I mostly enjoy reading the exchanges with the hope of learning something I didn't know before. You'll notice most of my posts on actual discussions are inquiries seeking clarification to a point or to bring attention to a possible fallacy in one's argument (which it's usually not but rather my own misunderstanding).
Something I'd say is true as far as this place goes is, you can always ask questions about something if you're genuinely curious about it, don't understand, or want to gain a better sense of understanding or insight. Most of the heavy hitters here are fairly nice and do reply to novice questions, even in the midst of heated discussions. Just be prepared for the obligatory "I don't really see how that's relevant", usually prefacing a detailed and simplified explanation as to why their point stands and how your concern does not invalidate or otherwise put their argument into question.
But aside from that, most people here are very charitable and understanding as far as their time and intellect goes into explaining things if you simply ask with polite inquisitiveness or curiosity.
No, I suppose not. :grin:
However, one might find value in the following analogy, be it "weak" or not. An AI or LLM is essentially a brain waiting to be trained (filled with knowledge). Consciousness in human beings is essentially a brain. Perhaps one may liken AI or LLM to a brain without a body. Schoolchildren have brains waiting to be filled with knowledge. So the two have at least that much in common, one might say? :confused:
All good points. My reply is just that the rule is unenforceable, given that it is already all but impossible to tell how much of a piece is constructed by AI, and that doing so will only become more difficult.
But I'm not the one enforcing the rule, so there's that. Doubtless many posts are already at least partly written by AI. Maybe you two have special skills.
I'm not in favour of a forum that consist in an exchange of posts written by AI. I don't have an answer.
Far and away the commonest mistake on these fora is for folk to think they have an answer when they don't.
I'm so grateful to be alive at this time, to be in the middle of this epochal event.
Well, to be fair, one could likely point to any innovation (or at least the localized introduction of an innovation) in any reasonable "generational period" of 50 years as something truly "revolutionary" and groundbreaking. In 50 years, assuming we haven't blown ourselves up yet and irradiated the world beyond repair, which is a risky bet all things considered, they'll be saying the same thing. Just as those 100 years ago said about the refrigerator. And those 100 years before that about the steam engine. And 100 years before that about the pistol or the first vaccine. And 100 years before that with the toilet. And so on and so forth and blah. It just gets tiresome. Everything is amazing. Let's leave it at that.
Unfortunately, it's almost inevitable now that AI will become in the near future THE general authority. So thinking will no longer be a practical necessity. We could even draw a logical line from human laziness to a situation where people simply plug their "personality" into a mobile AI, stick it on themselves, and allow it to do all their conversing for them.
Quoting Leontiskos
All we can do is be the change we want to see. I'd rather lose an argument than bluff my way through one. That's the beginning of outsourcing your personality. The end is human jello permanently plugged into AI-TikTok, gurgling its way happily to death.
Yes, yes, we do... None of which are helpful or even relevant, sadly.
The man with the golden touch who would be king in one world, would be but another lowly bricklayer in another whose streets are paved with such.
I very much agree.
Quoting Baden
Okay, fair. Still, I want to say that the canons for reasoning that we have developed as a species are reliable and recognizable. What is at stake now is a particular kind of appeal to authority: appeal to LLM. Our canons include sound principles for determining when an appeal to authority is permissible and when it is not, but such principles will be challenged by the advent of AI.
TPF already has a precedent for disallowing or at least discouraging certain sources for appeals to authority, particularly sources which are deemed morally inappropriate (e.g. Lionino's ban involved such a source, if my memory serves). I would suggest that it is at least possible to establish a precedent for discouraging the "appeal to LLM" move, especially given the soft and flexible nature of TPF rules. And perhaps the current rule already does this to some extent.
(More specifically, I would say that our canons for reasoning generally require that the inferential steps used to reach a conclusion be made publicly available. A post which leverages an LLM in a way that is consistent with this principle would not be beyond the pale, given that such a post would not merely be appealing to the LLM as a blind authority. Yet a post which relies on an LLM in a way that is "blind" and inconsistent with this principle would be beyond the pale.)
It cannot think. It is just a tool.
An idiot using a hammer is still an idiot using a hammer. Destructive rather than constructive. Authority? Nope, none.
Sometimes it is impossible for me to be good and do the right thing, because there is another right thing that trumps the first right thing. This morning, in a thread about education, I used AI. But darn it! I feel passionately that we need to know some things if humanity is going to make the right decisions. The AI I used may not be 100% correct, and I said I disagree with one of the points. Nothing is going to be 100% perfect.
However, we can share social agreements such as the right to bear arms, but then we have to work on agreements about our behavior. Is there a right way to use AI? Can I prove I have social agreement when I am explaining a problem or solution? Because what I believe is the most important information is from very old books, my point of view is different from all others. Because what I say is different, it is assumed I am wrong and do not know what I am talking about. I feel pretty alone with this burden, and that makes AI useful in making my point.
AI isn't worse than any other piece of information. If you want to see really bad, bad information, go for a religious explanation. AI is not a false god. It is a useful tool. How we use it might matter but it will never trump religion for being problematic.
And even if I were speaking to God himself, I would have my own opinion, and I would tell him I think he did a few things wrong.
An idiot using a medical book is no better than an idiot using a hammer, but we aren't going to ban medical books, are we?
Something very frightening is happening today. People want more and more control, and this destroys not only our liberty but our ability to manage our liberty as well. I harp and harp about education, because it is essential to our safety and liberty. Ignoring the importance of education and trying to protect everyone by taking away their liberty is a dark cloud over us right now. I think we are experiencing how Dark Ages happen.
This is why I argue against education for technology. I think the world you want requires a liberal education. I have been alone with this argument for many years. I could die in peace if I were not the only one fighting for liberal education.
Oh please, AI does not have its own opinion because it does not have a personality. AI is a lot of information, and the machine can organize that information. It has a much broader source of information than any person can have. Therefore, it is more useful than asking your friends for information.
It is important to keep things in perspective. It is a tool and it must remain a tool. BUT WE CAN MAKE IT A TERRIBLE POWER OVER US. I believe education for technology has humans in a very dangerous position right now.
What is the rule about talking politics? :nerd: I know, I need to check with the ancient philosophers and see what they have to say about making ourselves subservient to a machine. Oh, maybe that won't work. According to a professor I absolutely hate, information older than 10 years is useless, and so are the people who believe that crap.
This forum is much, much better than most. And like you, I am often overwhelmed by better-informed people. Curious how people intent on thinking can also be the nicest people. Now we just need an education system that encourages this. I think some places are developing a better understanding of what the young need to learn. I have hope we will no more let AI rule over us than we would allow a pope or king to rule over us. What we do depends on how we are educated.
:smirk:
Quoting Baden
:up:
I.e. dogmatism (or superstition).
:clap: :lol: Welcome to the Matrix!
Quoting Athena
Funny thing about "liberal education" is those few with the most of it have always, in theory and practice, substantially denied it to the many who need it to help liberate themselves. Modern history shows that "liberal education" (as e.g. Jefferson / Paine / Marx suggest) is only a necessary, but not a sufficient, condition for liberty of the many.
We are mostly singing from the same hymn sheet then. But I think it's OK to educate kids in how to use technology if they understand its situatedness with regard to subjectivity. And that can start simply by telling them: This stuff is not just something you use, but that if you use it, will use you. Here's how...
Quoting Baden
Something is wrong with humanity if that happens. My experience with AI this morning was not that impressive. How it answers a question depends on how the question is worded. A person can get different answers by asking the same question differently. That can be like pulling back the curtain and exposing the Wizard of Oz.
However, the things AI can do, like create false pictures, are threatening. I have a friend who is working on high-tech computer chips, and I can hardly wait to visit with her and discuss the threats she sees. But I also see vax deniers and other conspiracy theorists, and I am shocked by what people believe. Like not only is the technology threatening, but people's willingness to believe lies is frightening. We need to work on our logic skills.
I am the one fighting to make it okay to use AI. Insisting people have liberal educations is not being anti-technology. However, a high IQ and being able to program computers or make mass destruction weapons does not equal wisdom, and we are the dumbest animal on the planet if we don't realize how important wisdom is. That requires a liberal education, and feeling responsible for what one knows and how one uses that knowledge.
I argue for liberal education as Christians argue for the Bible. Ever since the beginning of the US there has been a conflict between religious people and those who believe the Enlightenment is the most important source of knowledge.
I don't think we want to stop technology, but we need the wisdom to use it. Give us the liberty to use AI and work on our wisdom to use it well.
I think the way you are using AI is okay.
Our antipathy isn't directed at what you've described using it for, at least. You're not copy-pasting directly out of it, and you're willing to hear other sources rather than rely upon it as a source -- it's a tool for seeing something you may not have heard about, but you're not parading it about like an authority.
Saying that I hope you don't feel like we're fighting you, while still answering your question as to "Why not AI?"
Old age is a double-edged sword. My brain isn't doing some things so well, but it is doing other things much better. What I like best is that I totally get that even the best of minds are very limited. Even the best authority is limited to a very tiny bit of information compared to all there is to know. Now I don't have the words for what I want to say, but I have a better understanding of the meaning. This is a moment when I have to Google for the words I am trying to remember. And here are the words I wanted to remember. "The More You Know The More You Realize You Don't Know".
:grin: That is a perfect explanation of why I want to use AI. The information is in my brain but I can't think of the right words.
Secondly, I am a fighter if the cause is important to me, and in this forum, everyone is so nice that there is nothing to be upset about. I keep thinking if I think of the right words to say, everyone will agree with my reasoning. :lol: Or maybe not. :lol: As long as everyone is nice about it, it doesn't really matter if we disagree.
Interesting. What must go with a liberal education to manifest liberty?
I came across the notion that those who want liberty have the least freedom because of being self-regulating. That is, we do not do whatever we feel like doing, but we choose to do the right thing, even when we don't want to. I have heard the way to protect our liberty is to obey the law. But sometimes the law is unjust, and then action must be taken to correct that wrong.
I am usually arguing the importance of a liberal education, and Jefferson would not deny anyone a liberal education. Jefferson held that universal education is the most effective means of preserving democracy and good government.
except for slaves, indentureds, girls and women ...
However, it's not a sufficient condition for robust liberty.
.
Wrong. AI does not know that water is wet. It does not know what wet is, and it does not know what water is. All it knows is what words usually go together.
And that is why it cannot do philosophy, which is the attempt to disentangle the muddles that words create, using the world as template. Water is wet because: feel it, this is wetness. There is no logic to this; it is a demonstration. Here, you might need a towel. Oh, sorry, you seem to have blown a fuse and fried your circuits.
You don't think it will ever do philosophy on par with Nagel or Rawls or Chalmers?
Not even on a par with Ayn Rand, or Walt Disney.
AI is useful. In education AI can do great things for sure, as it can assess multiple students on a one-to-one level and pick out helpful routes for particular students with particular difficulties. A teacher has limited time resources.
Using AI to help you reframe your words for this forum is 'okay' I think, but I would go for your own attempts first and then try to rearticulate a few times before resorting to AI to interpret what you want to say. Otherwise you may start feeling like those guys in jobs where they have to use it to keep their jobs.
My next consideration is quality, and AI is a better writer than I am. Finding an exciting explanation is like bird watching with others, when the intent is to identify and count the number of birds, except I am looking for information, not birds. I think discriminating against AI is like discriminating against people who look different. The rationale is a rationale, but it is not good reasoning.
So no one here cares about my thread about the great depression, and that is easy to accept. However, it came up in a Google search, so someone or something thought that information was worth spreading. That could be good for this forum, as it could attract a new person or many new people. That would make me feel good, but how much work do I want to put into it? If I could use AI as a second person in the thread, it might be fun to see what I could do. But I am not going to write the whole thread by myself, replying to my own post on the chance that something good could come from that. There has to be at least one other voice other than mine.
I don't think you need to worry about what AI could do to my enjoyment of learning.
There are a lot of threads that seem to die out pretty quickly. It isn't so much that people don't care as they may not have the competence, knowledge or will to critique the subject and its responders. LLM/AI could make participation easier as it removes the friction/work of sharing analysis.
The prohibition here does make me feel bad about my own LLM use, as if I'm doing something wrong. But I suppose it is less bad than losing myself in mindless entertainment, like video games. A lot of folks here critique popular culture, in an offhand way, as something awful, as if it were a competing antagonist to their more ambitious aims (self edification through discourse). The forum no doubt plays an important social function for some people, which for others the consumption of media/books/television tries to fill.
We could look at LLM as a synergistic element, in the sense that two minds are better than one, when one mind cannot receive sufficient feedback from another. Of course it is wrong to consider LLM a mind, but insofar as it can produce the illusion of a mind as output, it can assist a real mind/person in trying to make sense of the world.
:joke: I'm very disappointed that forum users don't use horses instead of cars to get to work. They/we could learn something about hard work from the Amish.
Or all philosophical exchanges should be done by embodied oration, in a public forum.
:joke:
My point was that I was working to benefit the forum and needed a voice other than my own, and that other voice could be AI. You know, along the line of using a car to get to work might work better than a horse and buggy. I am not sure the decision to restrict the use of AI in the forum is the best rationale. However, we may all be concerned about our economy being tied to AI. This could get very interesting very fast. The book "The Mayan Factor" by Jose Arguelles predicts an economic collapse on our Path Beyond Technology.
This is the modern malaise most young people also understand, given the roulette wheels of fleeting pleasures available at our fingertips. If AI can help sustain attention/commitment to the working topic, to dig in rather than just glide over the surface and onto the next thing, it surely is useful. But as folks have said, is it just another modern crutch that makes us weak and dependent and not very good logical thinkers?
Quoting Athena
Recently I saw headlines that ChatGPT data can be handed over to the police, as algorithms may detect those who are about to commit a crime. We're definitely living in a sci-fi novel now, much like Minority Report, except the precogs (those that see us better than we see ourselves) are machine learning networks controlled by private businesses. The abuse of control over people's lives, from power/wealth incentives, is worrisome, especially with the political climate now in the USA.
Also, if these big data companies rely on advertising for their business model, and LLM search inquiries bypass advertisements, what do they stand to gain by doing that? When will LLM content start sneaking advertisements into its free/base tier service, like all of the video/music streaming services have increasingly done?
I don't think that last line defines my experience. :lol: My living space is now cluttered with books related to discussions, especially Jack's thread about God. This is not the same clutter of books I had two weeks ago because of a thread I was doing in a history forum. But then I use a walker, and I don't think it makes me weaker, because without it I would not go for walks and for sure my body would get worse. I think it is our motivation that determines how we use tools and aids. If a child were to read something an AI said and ask me about it, I would be delighted and avoid defining the technology as bad and harmful.
As for your next paragraph. The subject demands our attention and perhaps our action. I have been complaining about our lack of privacy ever since we went from laws protecting our privacy to employers and landlords wanting to know what we used to keep private. I think charging a fee to prevent advertising is extortion that should be against the law.
With the 1958 National Defense Education Act, we went into education for technology and dropped the liberal education that prepared us for good moral judgment. One of the problems of relying on the Bible for good moral judgment is that the book does not help us with the present demands of moral decision-making.
We have so much to discuss. Has the change in education and development of technology put us in a precarious position?