Currently the AI selects parts of the video that are not sufficient to tell the story of the full video. The selected excerpts fall far short of usable for most people, which is a shame, since the ability to make shorts is exactly what we are all buying and using this for.

It would be better for the Minvo team developers to prompt much more thoroughly on the back end, and to test and ensure that the LLM is using the transcript to select ALL of the best parts of the video (or, if selection is based on the video itself, to improve that algorithm accordingly). I consider this a bug with a relatively easy fix: better chain prompting. Or open it up so users can tune the prompting ourselves; that would honestly be even better, since we could make the excerpt selection more detailed and nuanced for our own videos.

This was also pointed out in Dave Swift's review, so it's not just me. It's why he rated the other AS BF video clip tool Super tier (A++) while Minvo was lumped into B. It really hurts usability when you test the tool or want to make a short and it gives you a limited set of moments that aren't postable, shareable, or usable. This should be priority #1.

Thank you to the team for taking the core value proposition of this software seriously. Please let me know if you have any questions about how you might "open up the prompting" to the user. For example, when starting the excerpt selection, you could have us fill in a blank: "What do you believe are the key points in the video that would make a good short?"
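To make the "open up the prompting" idea concrete, here is a rough sketch of the kind of chain prompting I mean. Everything here is an assumption on my part about how the backend might work: call_llm is just a stand-in for whatever model call Minvo actually uses, and user_focus is the fill-in-the-blank answer I suggested above. The idea is that step one asks for every candidate moment in the transcript, and step two ranks and filters them, instead of asking for a handful of clips in a single pass.

```python
# Rough sketch of a two-step prompt chain for excerpt selection.
# Assumptions: call_llm() is a placeholder for whatever LLM call Minvo
# uses internally; the transcript is plain text with timestamps.

def call_llm(prompt: str) -> str:
    """Stand-in for the real model call (e.g. an API request)."""
    raise NotImplementedError("wire this up to the actual LLM backend")

def select_excerpts(transcript: str, user_focus: str = "") -> str:
    # Step 1: exhaustively list candidate moments, not just a few.
    candidates = call_llm(
        "Read the full transcript below and list EVERY self-contained "
        "moment (with start/end timestamps) that could work as a short. "
        "Do not stop at 3-5; list all of them.\n\n"
        f"Transcript:\n{transcript}"
    )

    # Step 2: rank and filter the candidates, using the creator's own
    # description of the video's key points if they provided one.
    focus_note = (
        f"The creator says the key points of this video are: {user_focus}\n"
        if user_focus else ""
    )
    return call_llm(
        "From the candidate moments below, pick the ones that best tell "
        "the story of the full video and would be postable as shorts. "
        "Return timestamps and a one-line reason for each.\n\n"
        f"{focus_note}Candidates:\n{candidates}"
    )
```

The point isn't these exact prompts. It's that an exhaustive first pass plus a ranking pass, with an optional user-supplied focus, should surface far more usable moments than a single "find me some clips" prompt.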