Will Lunacy 12 support local vLLM integration to restore AI features?
In the latest version of Lunacy (v12), the local AI deployment features appear to have been removed. For users who need offline workflows, this is a significant change. Are there plans to support local vLLM (or other local LLM) integration in the future to restore these AI capabilities?
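For context on what such an integration usually involves: a locally hosted vLLM instance typically exposes an OpenAI-compatible HTTP API, so an application can talk to it with standard chat-completion requests. Below is a minimal sketch of building such a request; the endpoint, model name, and prompt are illustrative assumptions, and the request is constructed but not sent (a real call would require a running local server).

```python
# Sketch: building a chat-completion request for a locally hosted vLLM
# server via its OpenAI-compatible API. URL, model name, and prompt are
# hypothetical placeholders, not Lunacy's actual integration.
import json
import urllib.request

VLLM_BASE_URL = "http://localhost:8000/v1"  # common default for a local vLLM server


def build_chat_request(prompt: str, model: str = "local-model") -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat-completion request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }
    return urllib.request.Request(
        url=f"{VLLM_BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = build_chat_request("Suggest a name for this artboard")
print(req.full_url)
```

Because the API mirrors OpenAI's, an application that already supports a cloud LLM backend could in principle point the same client code at a local endpoint, which is why users often ask for this kind of integration.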
Comments
Most likely not. Current local models are too poor in quality to be used in the application, so for now we don’t see any point in supporting them.