It feels like raw prompting is the assembly language of the LLM era, with higher-level abstractions emerging to interface with LLMs for better results. Surprisingly, I have not seen many startups in this space. Why not?
There could be various reasons.
1. The tooling is still not mature enough to make optimization truly directional. TextGrad mimics differentiable optimization, but under the hood the loss is still text rather than a number, and thus not actually differentiable (see the sketch after this list). Code pointer: https://github.com/zou-group/textgrad/blob/main/textgrad/loss.py#L44-L52
2. People aim directly for AGI, in which case the end user would not need to do prompt optimization at all. In the future, we could write plain natural language instead of carefully polished prompts.
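
To make the first point concrete, here is a minimal sketch of the TextGrad loop, based on my reading of its README (exact signatures may differ across versions). The key observation: `TextLoss` returns another text variable, an LLM-written critique, rather than a numeric scalar, so `backward()` produces natural-language feedback instead of real gradients.

```python
import textgrad as tg

# The "gradients" are natural-language critiques from an LLM,
# not numbers from autodiff.
tg.set_backward_engine("gpt-4o", override=True)

# The prompt we want to optimize, marked as trainable.
system_prompt = tg.Variable(
    "You are a concise assistant.",
    requires_grad=True,
    role_description="system prompt to optimize",
)

# TextLoss wraps an evaluation instruction. Calling it returns a
# text Variable (a critique), not a scalar loss value.
loss_fn = tg.TextLoss(
    "Evaluate whether this system prompt would elicit accurate, "
    "concise answers. Be critical and give concise feedback."
)
loss = loss_fn(system_prompt)

# backward() asks the LLM for textual "gradients" (feedback);
# TGD (Textual Gradient Descent) then rewrites the prompt using them.
loss.backward()
optimizer = tg.TGD(parameters=[system_prompt])
optimizer.step()

print(system_prompt.value)  # the rewritten prompt
```

Because the loss lives in text space, the whole loop is an LLM-feedback heuristic rather than gradient descent in any mathematical sense, which is exactly why the optimization is hard to make reliably directional.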