Thanks for the dedication to test the shiny new tool. We need people like you who are skeptical of enterprise blog posts and test things yourself on your own platform. I'm sure they're happy with the feedback, and more people are now aware of the new integration.
Awesome article, thoroughly enjoyed it. Thank you.
I'd be interested in a comparison without any index. You don't always have the right index(es) at hand, and indexes come with overhead of their own. There could be huge value in supporting analytical workloads without needing any index at all.
Thanks for trying it! As we said, indexes are not supported, so to keep the full-scan comparison fair, we didn't create an index. In practice that's not entirely realistic: you would have indexes, but they may not support your analytical queries, since they're usually created for transactional workloads. We could have been clearer in the blog announcement - that's good feedback.
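To illustrate the mismatch (schema and names here are made up, not from the benchmark): an index built for transactional point lookups often does nothing for a typical aggregate query, so the planner falls back to a full scan anyway.

-- hypothetical OLTP-style index: great for single-row lookups by key
CREATE INDEX orders_id_idx ON orders (order_id);

-- a typical analytical query filters and aggregates on other columns,
-- so the index above is never used and the whole table is scanned
SELECT customer_region, SUM(amount) AS total
FROM orders
WHERE order_date >= DATE '2024-01-01'
GROUP BY customer_region;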
FYI - We've updated both the blog and YouTube video to better explain why we didn't use indexes. ☝️✅
Super interesting. So as I understand it: without columnar storage, DuckDB has to do a full table scan, reading all columns, on every query, which makes it slow compared to the native Postgres engine, which can take advantage of indexes.
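If you want to sanity-check which access path you're getting, EXPLAIN makes it visible (table name hypothetical, as above); with no index that covers the query, the plan reports a sequential scan:

EXPLAIN (ANALYZE, BUFFERS)
SELECT customer_region, SUM(amount)
FROM orders
GROUP BY customer_region;
-- expect something like: Seq Scan on orders ...
-- in a row store, that scan reads every column of every row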
I wonder why 100M rows is so much slower than 50M rows for both engines.