
RE: The performance cost of pl/pgsql exception block in Postgres

in #postgres · 11 months ago

The conclusion is far too premature. The overhead might be constant per call, constant per row processed, or data dependent in a different way than the main processing. You are doing two relatively heavy operations in the example: searching for a value in jsonb and filling up a table. What if you used a trivial operation instead? What if you ran an even more complicated query? How about checking with half of the table rows? Or twice as many? I also understand turning off JIT, but if you hadn't, it might actually notice that the exception can never be raised and optimize the handler out. You'd need to run the test with JIT on and an exception that may or may not be raised depending on the data (actually at least two tests: one that depends on indexed data, another with data not covered by any index).
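To illustrate the "trivial operation" variant suggested above, here is a minimal sketch (function names are illustrative, not from the original post): two PL/pgSQL functions with identical trivial bodies, differing only in the presence of an `EXCEPTION` clause, so any timing difference isolates the per-call overhead of setting up the subtransaction:

```sql
-- Hypothetical stripped-down comparison: same trivial body,
-- with and without an EXCEPTION clause.
CREATE OR REPLACE FUNCTION no_exc() RETURNS int LANGUAGE plpgsql AS $$
BEGIN
  RETURN 1;
END $$;

CREATE OR REPLACE FUNCTION with_exc() RETURNS int LANGUAGE plpgsql AS $$
BEGIN
  RETURN 1;
EXCEPTION WHEN others THEN
  RETURN 0;  -- never reached; the block still forces a subtransaction
END $$;

-- Time each over many calls:
EXPLAIN ANALYZE SELECT no_exc()   FROM generate_series(1, 1000000);
EXPLAIN ANALYZE SELECT with_exc() FROM generate_series(1, 1000000);
```

Repeating the same comparison at half and double the row counts would then show whether the gap scales per call or with the data, as the comment asks.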

An alternative is to look into the Postgres sources to see what actually happens when exception handling is added to a block 😁


I added the measurements for half and double data sizes.
I really didn't want to explore the JIT possibilities; I'm not very familiar with it, and I was also more interested in PL/pgSQL itself.