Infinite scrolling is implemented in jerboa, so it could definitely be brought to the web client.
I imagine it’ll be possible in the near future to improve the accuracy of technical AI content somewhat easily. It’d go something along these lines: have an LLM generate a candidate response, then have a second LLM validate that response. The validator would have access to real references it can use to check some form of correctness; e.g. a Python response could be run through a Python interpreter to make sure it, to some extent, does what it is purported to do. The validator then either decides the output is most likely correct, or generates feedback asking the first LLM to revise until the response passes validation. This wouldn’t catch 100% of errors, but a process like this could significantly reduce the frequency of hallucinations, for example.
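A minimal sketch of that loop, with the "LLMs" stubbed out as plain functions (the names `generate`, `validate`, and `answer` are mine, not any real API; in practice both would be model calls):

```python
def generate(prompt, feedback=None):
    # Stub generator LLM: first attempt contains a bug,
    # the revision after feedback fixes it.
    if feedback is None:
        return "def add(a, b):\n    return a - b"  # buggy candidate
    return "def add(a, b):\n    return a + b"      # revised candidate

def validate(code):
    # Stub validator LLM with access to a real reference:
    # an actual Python interpreter plus known test cases.
    env = {}
    try:
        exec(code, env)
        assert env["add"](2, 3) == 5
        return True, None
    except Exception as e:
        return False, f"failed validation: {e!r}"

def answer(prompt, max_rounds=3):
    # Generate, validate, and revise until the candidate passes
    # or we run out of rounds.
    feedback = None
    for _ in range(max_rounds):
        candidate = generate(prompt, feedback)
        ok, feedback = validate(candidate)
        if ok:
            return candidate
    return None  # give up; surface the failure instead of a hallucination

result = answer("Write an add function")
```

The key design point is that the validator grounds its judgment in something external (an interpreter, a test suite, a reference document) rather than another model's opinion.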
Sounds like a NAS to me!
Consider charging at home, if you can. If your typical driving consists of trips under 100 miles from your home and it’s possible to plug in at home (a standard 120V outlet is typically sufficient), then you don’t need public charging stations. Just plug your car in at night and it’ll be topped back up every morning.
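Rough back-of-the-envelope math, using assumed (not universal) numbers for a Level 1 charger and EV efficiency:

```python
# All figures below are typical assumptions, not specs for any particular car.
volts = 120                   # standard US household outlet
amps = 12                     # common Level 1 charger draw
mi_per_kwh = 3.5              # assumed EV efficiency
hours_overnight = 12          # plugged in from evening to morning

kw = volts * amps / 1000      # ~1.44 kW charging rate
miles_recovered = kw * hours_overnight * mi_per_kwh
print(round(miles_recovered))  # ~60 miles of range per night
```

That comfortably covers most daily commutes, which is why a plain wall outlet is often enough.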
So far it’s been good! Lemmy has made me hopeful for better social media. I’m not hugely into twitter-style social media, so I was never really able to appreciate Mastodon.
I’m actually quite surprised with how much content is here already. There are regular posts and conversations, and a good mix of content. It’s not at the level reddit is in terms of volume, but I don’t feel starved or anything. I look forward to the future here!