Yes, there is room. Get the Stardust demo from GitHub (linked from https://software.intel.com/en-us/articles/using-vulkan-graphics-api-to-render-a-cloud-of-animated-particles-in-stardust-application), install Visual Studio Community Edition and register with Microsoft.
In the file settings.h you can change:
// Number of points per draw call
#define k_Def_Batch_Size 10
to a higher value, e.g. 500. Build and run: instead of ~80 fps I get 245 fps when the stardust cloud fills the whole screen, and 538 fps when it almost collapses into 1/20th of the window area.
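For reference, this is the change; the draw-call counts in the comment are just the arithmetic that follows from the demo's roughly 2,000,000 points (see the edit below):

// Number of points per draw call
#define k_Def_Batch_Size 500   // raised from the default of 10

// With ~2,000,000 points this records 2,000,000 / 500 = 4,000 draw calls
// per frame instead of 2,000,000 / 10 = 200,000.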


Thank you, Intel, for providing the code. It is a nice framework and uses SDL2.
Edit: as I read in their documentation, the purpose of the demo is to generate an extreme number of draw calls, each of which could be an individual object. That is why they take 2,000,000 points and put only 10 of them into one batch per draw call. The result is a better distribution of draw-call recording across all CPU cores, which was not possible in OpenGL. It seems to me that the value of 10 is so low, resulting in 200,000 draw calls per frame, that neither the GPU nor the CPU is the limiting factor; rather, it is the per-call overhead and the memory bandwidth to the GPU. Each draw call of only 10 points carries its own overhead. That is why my fps is only ~80 with 10 points per call, while I get over 300 fps with 500 points per call, yet in both scenarios my CPU is only used to about 50%.
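To illustrate where that overhead comes from, here is a minimal sketch of a per-batch recording loop, assuming a straightforward vkCmdDraw path like the one the article describes; the function and variable names are mine, not the demo's:

#include <vulkan/vulkan.h>

/* Sketch only, not the actual Stardust source: record one vkCmdDraw per batch. */
static void record_point_draws(VkCommandBuffer cmd,
                               uint32_t point_count,
                               uint32_t batch_size)
{
    for (uint32_t first = 0; first < point_count; first += batch_size) {
        uint32_t count = point_count - first;
        if (count > batch_size)
            count = batch_size;
        /* One draw call per batch: 200,000 calls at batch_size = 10,
         * only 4,000 at batch_size = 500, so the per-call overhead shrinks. */
        vkCmdDraw(cmd, count, 1, first, 0);
    }
}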