Seconds since 1st January 1961 to date/time
Re: Seconds since 1st January 1961 to date/time
Thanks, all. I found something suitable here:
https://github.com/rofl0r/musl/blob/mas ... cs_to_tm.c
It just needs the mentioned correction of 283996800 seconds to get the starting point (1st January 1961 rather than the Unix epoch of 1st January 1970) right.
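For anyone wanting to see the correction in action, here is a minimal sketch in portable C, using the standard gmtime_r() rather than musl's internal __secs_to_tm() (assuming a POSIX environment; the example timestamp is made up):

Code: Select all

#include <stdio.h>
#include <time.h>

/* 283996800 s = 3287 days = 9 years (incl. leap days in 1964 and 1968)
 * between 1st January 1961 (the QL epoch) and 1st January 1970 (Unix). */
#define QL_EPOCH_OFFSET 283996800LL

int main(void)
{
    long long ql_secs = 2000000000LL; /* hypothetical "seconds since 1961" value */
    time_t unix_secs = (time_t)(ql_secs - QL_EPOCH_OFFSET);
    struct tm tm;

    gmtime_r(&unix_secs, &tm);        /* broken-down UTC date/time */
    printf("%04d-%02d-%02d %02d:%02d:%02d\n",
           tm.tm_year + 1900, tm.tm_mon + 1, tm.tm_mday,
           tm.tm_hour, tm.tm_min, tm.tm_sec);
    return 0;
}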
Re: Seconds since 1st January 1961 to date/time
Hi Dilwyn,
About 6 months ago I experimented with ChatGPT, getting it to write some assembler code for me. Superficially it looked good, but there were stupid mistakes like moveq #value,d1 when value had been defined as something like $2300 (MOVEQ only takes an 8-bit signed immediate, so such a value can never fit). After much fiddling and fixing I got the code to run, but it just produced nonsense answers. After that I thought I'd just wait a bit for the next generation!
I guess in a few years we might get it to write some of those pesky drivers we need, but it's not quite ready for that now. In the meantime, I enjoy writing my own code..
Per
I love long walks, especially when they are taken by people who annoy me.
- Fred Allen
Re: Seconds since 1st January 1961 to date/time
...can AI write a QPTR assembler program?
Regards,
Derek
Re: Seconds since 1st January 1961 to date/time
dilwyn wrote: Wed Dec 11, 2024 12:41 pm
Do I get credit for the first QL SuperBASIC listing written by AI?
Here's the same sort of thing done using HI. Of course, non-SMSQ/Minerva HI programmers will have to work a little harder..

Code: Select all

100 DIM D1(3), D2(3)
110 INPUT 'From date (DD/MM/YYYY)', D1(1)! D1(2)! D1(3)
120 INPUT 'To date (DD/MM/YYYY)', D2(1)! D2(2)! D2(3)
130 dt1 = DATE(D1(3), D1(2), D1(1), 0,0,0)
140 dt2 = DATE(D2(3), D2(2), D2(1), 0,0,0)
150 PRINT 'Delta days ='! (dt2 - dt1) DIV 86400
Per
I love long walks, especially when they are taken by people who annoy me.
- Fred Allen
Re: Seconds since 1st January 1961 to date/time
pjw wrote: Wed Dec 11, 2024 10:40 pm
Here's the same sort of thing done using HI.
What is HI?
Only mentioned the things I did because Peter wanted to avoid the QL routines (i.e. have to work harder, as you said).
--
All things QL - https://dilwyn.theqlforum.com
Re: Seconds since 1st January 1961 to date/time
dilwyn wrote: Wed Dec 11, 2024 10:56 pm
What is HI?
Human Intelligence (I guess Per's).
ʎɐqǝ ɯoɹɟ ǝq oʇ ƃuᴉoƃ ʇou sᴉ pɹɐoqʎǝʞ ʇxǝu ʎɯ 'ɹɐǝp ɥO
Re: Seconds since 1st January 1961 to date/time
Ah, of course.
--
All things QL - https://dilwyn.theqlforum.com
Re: Seconds since 1st January 1961 to date/time
I just posted a reply. It seemed to vanish into cyberspace, so here goes again, although it may, magically, appear twice:
You're right, tofro: by HI I mean Human Intelligence.
I spent 15 minutes trying to get the AI version to work. It was full of inconsistencies and mistakes. In the end I lost the thread (and interest). Then I thought "How would I do it?" The result of the next 5 to 10 minutes is what I published above.
It is consistent with my experience with the assembler code I mentioned. It looks the ticket, and a lot of it seems right, probably the result of some mimicry rather than original "thought", but for now, at least, it's not good enough, whatever the AI evangelists would have us believe. Perhaps it can do better in more mainstream languages like C..
Per
I love long walks, especially when they are taken by people who annoy me.
- Fred Allen
Re: Seconds since 1st January 1961 to date/time
In theory AI should get better - if it's able to scan your result from this forum, you may find that if you were to ask it again, it may 'incorporate' your code.
In the last place I worked we noticed that ChatGPT did this and provided better and better answers when it was given feedback to indicate previous answers may have been wrong.
Re: Seconds since 1st January 1961 to date/time
Pr0f wrote: Thu Dec 12, 2024 10:15 am In theory AI should get better - if it's able to scan your result from this forum, you may find that if you were to ask it again, it may 'incorporate' your code. In the last place I worked we noticed that ChatGPT did this and provided better and better answers when it was given feedback to indicate previous answers may have been wrong.
Well, the main function the AI executes is "understand what you want" and then "copy something from somewhere" that "does what you want". It doesn't assess the quality of the stuff it copies in any way other than from your feedback and from how frequently the same solution appears in its "knowledge base" (which is basically "the interweb").
If you don't provide good feedback, the answer can't be very good. Nor can the answer be very good when the sample space of solutions to your problem isn't large, which is the case for QL source code: measured against the size of the interwebs, the amount of "QL problems" solved there is likely microscopic.
I did ask ChatGPT for a number of QL solutions and it came up with total bullshit. Most often it didn't even know what SuperBASIC, QDOS, or QL mean, and even started to fantasise about operating systems and programming languages that never existed. (Which is one of the other main problems I see with modern AIs: they never come back with "I don't know", but simply present what they found and, most importantly, never attach any sort of "hit probability" to their solution.) If you don't assess the answer (very) critically, your result might be totally off.
As it is not really Dilwyn's code presented here, I think I may criticise it a bit more mercilessly: the main issue I have with it is that it does the same thing every programming rookie does with date code: it goes for an iterative approach, which is dead slow and totally unnecessary. Most date/time problems can be solved without loops, which is much faster (see the sketch below). The more "real people" present this non-optimal answer on the Internet, the more the AI will come up with the very same answer.
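To make the loop-free point concrete, here is a minimal sketch in C (my own illustration, not code from the thread; it uses the widely known "days from civil" closed-form calculation, and the function name days_from_civil is just a label for this example):

Code: Select all

#include <stdio.h>

/* Days since 1970-01-01 for y/m/d in the proleptic Gregorian calendar -
 * no iteration over years, months, or days required. */
long days_from_civil(int y, int m, int d)
{
    y -= m <= 2;                              /* count Jan/Feb with the previous year */
    long era = (y >= 0 ? y : y - 399) / 400;  /* 400-year Gregorian cycle */
    unsigned yoe = (unsigned)(y - era * 400);                       /* year of era [0, 399] */
    unsigned doy = (153u * (m + (m > 2 ? -3 : 9)) + 2) / 5 + d - 1; /* March-based day of year */
    unsigned doe = yoe * 365 + yoe / 4 - yoe / 100 + doy;           /* day of era [0, 146096] */
    return era * 146097 + (long)doe - 719468; /* shift so 1970-01-01 becomes day 0 */
}

int main(void)
{
    /* Delta days between two dates, computed without a single loop */
    printf("%ld\n", days_from_civil(2024, 12, 12) - days_from_civil(1961, 1, 1));
    return 0;
}

As a cross-check, days_from_civil(1970, 1, 1) - days_from_civil(1961, 1, 1) gives the 3287 days (283996800 seconds) mentioned earlier in the thread.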
The worry I have with "relying on AI": we'll repeat the same solutions over and over again with no innovation whatsoever. And the more we repeat something, the more often the AI will find that solution and assume it "must be right" based on its frequency, leading to the same cud being chewed over and over again.
There is absolutely no "I" in modern "AIs". They are sophisticated language models that sort of understand what you're saying, then come up with an answer that more or less follows the same patterns. That's it. We're tricked into believing the AIs through their perfect command of language, which triggers a sort of cognitive bias in humans:
Imagine you're presented with two solutions to a problem you have no previous knowledge of: one (which might be brilliant) is presented by someone who stumbles through their sentences - maybe because it's not their native language - and in a maybe unstructured way; the other (which may be total bullshit) is presented perfectly, in well-formed language. Whom will you believe? Likely the perfect presenter, because you will silently put the presenter's command of the problem space on the same level as their command of the language.
Last edited by tofro on Thu Dec 12, 2024 11:24 am, edited 1 time in total.
ʎɐqǝ ɯoɹɟ ǝq oʇ ƃuᴉoƃ ʇou sᴉ pɹɐoqʎǝʞ ʇxǝu ʎɯ 'ɹɐǝp ɥO