Improve performance of date/time conversions #26

@kkieffer

Description

As my database grows, I've noticed performance becoming an issue compared with other database client implementations. I'm not sure whether this is due to my code in particular or to implementation differences.

Consider a table with 7 or so columns and about 6,000 rows. The columns contain simple data: a last-name string, some integers, a timestamp, and a few other strings. I want to retrieve the entire table, all rows and columns (so a simple SELECT statement). As I read each row from the query, the columns are parsed into their native types and stored in a list of objects.
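For reference, my retrieval loop looks roughly like the sketch below, written against PostgresClientKit's Connection/Statement/Cursor API. The table, column names, and the `Person` type are placeholders, not my actual schema, and `connection` is assumed to be an already-open `Connection`:

```swift
import PostgresClientKit

// Placeholder model type; the real schema has ~7 columns.
struct Person {
    let lastName: String
    let age: Int
    let createdAt: PostgresTimestampWithTimeZone
}

func fetchAll(connection: Connection) throws -> [Person] {
    let statement = try connection.prepareStatement(
        text: "SELECT last_name, age, created_at FROM person")
    defer { statement.close() }

    let cursor = try statement.execute()
    defer { cursor.close() }

    var people = [Person]()
    for row in cursor {
        // Each column is converted from its wire representation
        // to a native Swift type as the row is read.
        let columns = try row.get().columns
        people.append(Person(
            lastName: try columns[0].string(),
            age: try columns[1].int(),
            createdAt: try columns[2].timestampWithTimeZone()))
    }
    return people
}
```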

I have a Java client using JDBC running on a laptop, for which this operation takes about 300 milliseconds. Roughly 250 KB are retrieved.

An iPhone and an iPad running PostgresClientKit on the same Wi-Fi network each take about 8 seconds.

A few observations:

- I don't think this is a network speed issue.
- The laptop is faster, of course, but it's hard to believe it's that much faster.
- Both connections use SSL, so this could possibly be a performance issue in BlueSSLService.

Could the JDBC client be pre-fetching all rows before the cursor increments to them? Is there efficiency to be gained by fetching all rows at once?

Could there be an underlying socket performance issue with blocking/non-blocking reads?
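Given the issue title, one thing worth ruling out first is the cost of the date/time conversion itself. Below is a self-contained microbenchmark sketch using only Foundation (the format string, row count, and timings are assumptions, not measurements from my app). `DateFormatter` construction is known to be expensive, so if a formatter were created per value, 6,000 conversions could plausibly account for a large share of the 8 seconds:

```swift
import Foundation

// ~6,000 timestamp strings, mimicking one timestamp column per row.
let samples = Array(repeating: "2019-06-01 12:34:56.789", count: 6000)

func makeFormatter() -> DateFormatter {
    let f = DateFormatter()
    f.dateFormat = "yyyy-MM-dd HH:mm:ss.SSS"   // assumed wire format
    f.timeZone = TimeZone(identifier: "UTC")
    return f
}

// Case 1: allocate a new DateFormatter for every value.
func parsePerCall() -> Int {
    var parsed = 0
    for s in samples where makeFormatter().date(from: s) != nil {
        parsed += 1
    }
    return parsed
}

// Case 2: create the formatter once and reuse it.
let sharedFormatter = makeFormatter()

func parseShared() -> Int {
    var parsed = 0
    for s in samples where sharedFormatter.date(from: s) != nil {
        parsed += 1
    }
    return parsed
}

func time(_ label: String, _ body: () -> Int) {
    let start = Date()
    let n = body()
    print("\(label): \(n) values in \(Date().timeIntervalSince(start)) s")
}

time("new formatter per value", parsePerCall)
time("shared formatter", parseShared)
```

If the per-call case is dramatically slower on device, caching formatters in the conversion path would be an obvious optimization to investigate.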

Any thoughts appreciated.

Labels: bug