I appreciate your concerns, and @sensologica's. I value having a critical community that voices critique and expresses its opinions. However, I believe that this stance is a rather privileged one, and one that I personally cannot afford to take.
Essentially, if I understand you correctly, you are saying that for now it's best to stand on the sidelines and let the AI wave ebb away. But there is a counterargument to this point.
Since you mentioned that development always involves political decisions (and I emphatically agree with this), here are the options we have for Zettlr.
First: do nothing, as you have suggested; users who want to integrate genAI into their workflows can already do so on their own. Second: embrace genAI and integrate it into the app in useful ways, which is what we are discussing here.
The first option means that we leave the genAI space to greedy corporations like OpenAI, as well as to the many smaller startups trying to make a quick buck off the current AI wave. Users will see their advertising and use their products. We can already see the fallout from this with students cheating on essay assignments and the powerlessness of instructors to rein this in.
The other option, however, would mean doing something similar to Apple: don't just produce roadkill after roadkill by unleashing underbaked technology onto the world, but think about proper ways of making genAI work for us. This would also have an educational aspect, showing users how genAI can actually work beyond chatting with some cool chatbot.
I believe that I have the responsibility to make the latter decision if I want to be true to my values. In addition, by including access to open approaches rather than closed ones, we can support the right kind of efforts (think of why I integrated Zettlr with LanguageTool, but not Grammarly).
I believe that not integrating genAI into Zettlr would be a grave mistake. Other apps already feature this, but often in haphazard ways where genAI can actually cause harm. I aim to integrate genAI into Zettlr in a way that is educational and supports open AI, rather than leave the space to actors who, I believe, make no attempt to mold genAI into an actual tool and remain stuck in the sandbox stage.
Lastly, there is certainly a personal bias of mine in here. I work with generative AI every day, both professionally and personally. I observe my field as it embraces genAI: many researchers use it in haphazard ways, yet journals publish their work because they don't see much wrong with it. I believe the only way to make genAI good is by acting, not by waving from the docks as the ship sets sail.
Does this make sense to you…?