See e.g. https://www.youtube.com/@Switch-Angel/videos for examples of Strudel being used to create trance music.

The following should be compatible with both approaches:

    .masonry {
      display: grid;
      display: grid-lanes;
      grid-template-columns: repeat(auto-fill, minmax(180px, 1fr));
      grid-template-rows: masonry;
    }
Firefox and browsers supporting the old syntax will ignore `display: grid-lanes`, as they don't recognize it, and fall back to the grid+masonry rules.

Browsers supporting the new syntax will override the `display: grid` with `display: grid-lanes` and ignore the `grid-template-rows: masonry` syntax.


Using texts up to 1913 includes works like The Wizard of Oz (1900, with 8 other books up to 1913), two of the Anne of Green Gables books (1908 and 1909), etc. All of these read as modern.

The Victorian era (1837-1901) covers works from Charles Dickens and the like, which are still fairly modern. These would have been part of the initial training before the alignment to the 1900-cutoff texts, which are largely modern in prose apart from some archaic language and the lack of technology, events, and language drift after that time period.

And, pulling in works from 1800-1850, you have works by the Brontës and authors like Edgar Allan Poe, who was influential in detective and horror fiction.

Note that other works around the time like Sherlock Holmes span both the initial training (pre-1900) and finetuning (post-1900).


Upon digging into it, I learned the post-training chat phase is trained on prompts with ChatGPT 5.x to make it more conversational. That explains both contemporary traits.

AI is a broad term going back to 1955. It covers many different techniques, algorithms, and topics. Early AI chess programs (Deep Blue, et al.) used tree search algorithms like alpha-beta pruning that are/were classified as AI techniques.

Machine translation is a research topic in AI because translating from one language to another is something humans are good at but computers traditionally have not been.

More recently, the machine learning (ML) branch of AI has become synonymous with AI as have the various image models and LLMs built on different ML architectures.


We are still in the exploratory phase of what features are useful or not.

I could see describing images being useful for blind or vision-impaired people. Publishers often have a large back catalogue of documents where it is both impractical and too costly/time consuming to have all the images in them described with alt text. This is one area where publishers would be considering using AI.

Text-to-speech and speech recognition also fall under the category of AI and these have proven useful for blind/visually impaired people and for people with injuries that make it difficult to use a mouse and keyboard.

On the search side, it would be interesting to see if running the user's query through an encoder and using that to find documents would improve search results, similar to how current TF-IDF (term frequency-inverse document frequency) techniques work.
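
As an illustration (not from the comment), here's a minimal scikit-learn sketch of that ranking pipeline: the corpus and query are made up, and a neural encoder would slot in by replacing the vectorizer with something that returns dense embeddings.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Hypothetical mini-corpus standing in for the publisher's documents.
    documents = [
        "Text-to-speech engines for screen readers",
        "Describing images with alt text for accessibility",
        "Caching downloaded pages to avoid re-fetching",
    ]

    vectorizer = TfidfVectorizer()
    doc_vectors = vectorizer.fit_transform(documents)  # one sparse vector per document

    def search(query, top_k=2):
        # Encode the query the same way as the documents, then rank by cosine similarity.
        query_vector = vectorizer.transform([query])
        scores = cosine_similarity(query_vector, doc_vectors)[0]
        return sorted(zip(scores, documents), reverse=True)[:top_k]

    print(search("how do I add alt text to images?"))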


Exploratory, yes, but when you're still exploring it's not a great idea to bet a whole company on something that might not pan out. Or to dump stuff on users with lots of promotion that may or may not actually be useful.

And yeah, for accessibility a lot of this is hugely useful, but that's a very niche use case.

And yes, AI-assisted search (or deep research, even better) is also something I like. But that's also because search engines themselves have become so enshittified and promote bad content first.


Plus simple caching to avoid redownloading the same file/page multiple times.

It should also be easy to detect a Forgejo, Gitea, or similar hosting site, locate the git URL, and clone the repo.
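
A rough sketch of both ideas in Python (stdlib plus the git CLI): the cache directory is arbitrary, and the /api/v1/version probe is an assumption based on the endpoint Gitea and Forgejo expose, which some instances may restrict.

    import hashlib, json, pathlib, subprocess, urllib.request

    CACHE_DIR = pathlib.Path("~/.cache/crawler").expanduser()

    def fetch_cached(url):
        """Download a URL once and reuse the on-disk copy on later calls."""
        CACHE_DIR.mkdir(parents=True, exist_ok=True)
        cache_file = CACHE_DIR / hashlib.sha256(url.encode()).hexdigest()
        if not cache_file.exists():
            with urllib.request.urlopen(url) as response:
                cache_file.write_bytes(response.read())
        return cache_file.read_bytes()

    def looks_like_gitea_or_forgejo(base_url):
        """Probe the version endpoint shared by Gitea and Forgejo."""
        try:
            data = fetch_cached(base_url.rstrip("/") + "/api/v1/version")
            return "version" in json.loads(data)
        except Exception:
            return False

    def clone(repo_url, dest):
        # A shallow clone is usually enough for indexing/analysis purposes.
        subprocess.run(["git", "clone", "--depth", "1", repo_url, dest], check=True)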


If you are finetuning the model you need to replicate the training conditions so you don't remove those capabilities. If you just finetune a multi-modal model on text, it will lose some of its vision capabilities as the text part of the model drifts from the vision, audio, etc. parts. A similar thing happens when finetuning reasoning models.

Even if you did finetune the model with text and images, you could run into issues by using different descriptions for the images than the ones it was trained with. You could probably work around that by getting the model to describe the images itself, but you'll still need to audit the results to correct any issues or add what you are training for.

You can also run into overfitting if your data does not include enough of the variation that the original model's training data had.

Using different training parameters could also affect the model's capabilities. Just knowing things like the input context length isn't enough.


This is the thing that kills me about SFT. It was sensible when most of the compute in a model was in pretraining and the RL was mostly for question answering. Now that RL is driving model capabilities it doesn't make much sense.

On the other hand, RL on deployed systems looks promising as a way to essentially JIT-optimize models. Experiments with model routers and agentic RAG have shown good results.


This is very true. However, I wonder how much of this can be mitigated by using training data from other open-source models, like Olmo3 for textual data and Emu3.5 for vision.

I have an ssh-switch script that runs `ssh-add -D` and `ssh-add $KEY_FILE` so I can do `ssh-switch id_github`, etc. This is coupled with an `/etc/profile.d/ssh-agent.sh` script that creates an ssh agent for a terminal session.
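
For illustration only, a Python sketch of what such a switch script might do; the real script is presumably shell, and the ~/.ssh key location is an assumption.

    import os, subprocess, sys

    def ssh_switch(key_name):
        key_file = os.path.expanduser(f"~/.ssh/{key_name}")  # assumed key location
        subprocess.run(["ssh-add", "-D"], check=True)         # drop all identities from the agent
        subprocess.run(["ssh-add", key_file], check=True)     # add only the requested key

    if __name__ == "__main__":
        ssh_switch(sys.argv[1])  # e.g. ssh_switch.py id_github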

The safetensors file format is a header length, JSON header, and serialized tensor weights. [1]

[1] https://github.com/huggingface/safetensors
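
A minimal sketch of reading that layout (Python stdlib; the file name is a placeholder): the first 8 bytes are a little-endian u64 giving the JSON header's size, and the tensor data follows immediately after the header.

    import json, struct

    def read_safetensors_header(path):
        with open(path, "rb") as f:
            (header_len,) = struct.unpack("<Q", f.read(8))  # u64 little-endian header size
            header = json.loads(f.read(header_len))          # JSON: tensor name -> dtype/shape/data_offsets
        return header  # raw tensor bytes start at offset 8 + header_len

    for name, info in read_safetensors_header("model.safetensors").items():
        if name != "__metadata__":                           # optional free-form metadata entry
            print(name, info["dtype"], info["shape"])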


The testable predictions would be at the places where QM and GR meet. Some examples:

1. interactions at the event horizon of a black hole -- could the theory describe Hawking radiation?

2. large elements -- these are where special relativity influences the electrons [1]

It's also possible (and worth checking) that a unified theory would provide explanations for phenomena and observed data we are ascribing to Dark Matter and Dark Energy.

I wonder if there are other phenomena such as effects on electronics (i.e. QM electrons) in GR environments (such as geostationary satellites). Or possibly things like testing the double slit experiment in those conditions.

[1] https://physics.stackexchange.com/questions/646114/why-do-re...


Re 2: special relativity is not general relativity - large elements will not provide testable predictions for a theory of everything that combines general relativity and quantum mechanics.

re: "GR environments (such as geostationary satellites)" - a geostationary orbit (or any orbit) is not an environment to test the interaction of GR and QM - it is a place to test GR on its own, as geostationary satellites have done. In order to test a theory of everything, the gravity needs to be strong enough to not be negligible in comparison to quantum effects, i.e. black holes, neutron stars etc. your example (1) is therefore a much better answer than (2)


Re 2: I was wondering if there may be some GR effect as well, as the element's nucleus would have some effect on spacetime curvature and the electrons would be close to that mass and moving very fast.

For geostationary orbits I was thinking of things like how you need to use both special and general relativity for GPS when accounting for the time dilation between the satellite and the ground. I was wondering if similar things would apply at a quantum level for something QM-related, so that you would have both QM and GR at play.
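
For a sense of scale, the standard (rounded) GPS clock-rate estimate, worked out here rather than taken from the thread:

    % Velocity (special-relativistic) term for an orbital speed of ~3.9 km/s:
    \frac{\Delta t}{t} \approx -\frac{v^2}{2c^2} \approx -8.3 \times 10^{-11}
        \quad\Rightarrow\quad \approx -7\ \mu\text{s per day}

    % Gravitational (general-relativistic) term for an orbital radius of ~26,600 km:
    \frac{\Delta t}{t} \approx \frac{G M_E}{c^2}\left(\frac{1}{R_E} - \frac{1}{r}\right)
        \approx +5.3 \times 10^{-10}
        \quad\Rightarrow\quad \approx +45\ \mu\text{s per day}

    % Net effect: satellite clocks run roughly 38 microseconds per day fast,
    % so both corrections have to be applied together.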

So it may be better to have, e.g., entangled particles placed/interacting in a way that GR effects come into play, and to measure that effect.

But yes, devising tests for this would be hard. However, Einstein thought that we wouldn't be able to detect gravitational waves, so who knows what would be possible.


You don't need a full fledged theory of quantum gravity to describe Hawking radiation. Quantization of the gravitational field isn't relevant for that phenomenon. Similarly you don't need quantum gravity to describe large elements. Special relativity is already integrated into quantum field theory.

In some ways saying that we don't have a theory of quantum gravity is overblown. It is perfectly possible to quantize gravity in QFT the same way we quantize the electromagnetic field. This approach is applicable in almost all circumstances. But unlike in the case of QED, the equations blow up at high energies which implies that the theory breaks down in that regime. But the only places we know of where the energies are high enough that the quantization of the gravitational field would be relevant would be near the singularity of a black hole or right at the beginning of the Big Bang.


Can't black holes explain dark energy? Supposedly there were observations showing black holes are growing faster than expected. If this is because they are tied to the expansion of the universe (the universe expands -> mass grows), and that tie goes both ways (mass grows -> universe expands), boom, dark energy. I also think that black holes contain their own universes which are expanding (and that we're probably inside one too). If this expansion exerts a pressure on the event horizon which transfers out, it still lines up.

