I have SOS v4.0 with a huge quantity of data: currently 3,882,606 observations.
It seems that the duration of the import function is proportional to the number of records in the database.
From this forum I understand that every import triggers a reload of the cache. Is this correct?
With this amount of data, every import now takes about 40 s. This is a serious limitation!
> I have SOS v4.0 with a huge quantity of data: currently 3,882,606 observations.
> It seems that the duration of the import function is proportional to the
> number of records in the database.
> From this forum I understand that every import triggers a reload of the
> cache. Is this correct?
No, the SOS 4.x does not reload the cache from the database after data
insertion. The SOS updates the local cache directly with the information
from the request.
Thank you very much for your support.
I have already tried to increase shared_buffers, but I got this error:
This error usually means that PostgreSQL's request for a shared memory segment exceeded your kernel's SHMMAX parameter. You can either reduce the request size or reconfigure the kernel with larger SHMMAX. To reduce the request size (currently 572907520 bytes), reduce PostgreSQL's shared memory usage, perhaps by reducing shared_buffers or max_connections. If the request size is already small, it's possible that it is less than your kernel's SHMMAX parameter, in which case raising the request size or reconfiguring SHMMIN is called for. The PostgreSQL documentation contains more information about shared memory configuration
It seems there is a kernel shared-memory limit.
Do you have any other hints? Currently: shared_buffers = 24MB
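For what it's worth, the request size in that error (572907520 bytes, about 546 MB) suggests shared_buffers had been raised to roughly 512 MB while the Linux kernel's SHMMAX was still at its small default. Besides lowering shared_buffers, the other way out is to raise the kernel limits. A sketch for /etc/sysctl.conf, with illustrative values only, not a recommendation for your machine:

```
# /etc/sysctl.conf -- illustrative values, adjust to your own hardware
kernel.shmmax = 1073741824    # largest single shared memory segment (1 GB)
kernel.shmall = 262144        # total shared memory, in 4 kB pages (also 1 GB)
```

Apply with `sysctl -p` as root and then restart PostgreSQL. Note that PostgreSQL 9.3 and later allocate most shared memory via mmap and rarely hit SHMMAX at all, so this mainly matters on older versions.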
In 52°North SOS version 4.0, the local cache is updated and
persisted to a file during the processing of transactional
operations. This can lead to longer processing times, depending on
the complexity of the metadata (number of procedures, offerings, ...).
Starting with version 4.1, the writing to the file is executed
asynchronously in a separate thread. This has considerably improved the
performance of transactional operations.
Therefore, I recommend using a newer version of the SOS.
I have upgraded my SOS to version 4.1.
Now the import procedure performs well!
The numbers are similar, 120 as before, but now 120 ms, not seconds!
Thanks for the support.
Normally I read the data in pgAdmin via "view table" or with "select * from observation;", i.e. a complete table view. It is very slow, even if I use a "where" clause to filter the data.
pgAdmin runs on my computer and the SOS database is on a server on our local network, which is fast.
The 52N web app is on a different server than the database. The DB server has 32 GB of RAM and shared_buffers = 4GB in postgresql.conf, which I think is enough.
At the moment we are filling the database and we have:
39 observable properties
1.5 million observations
But these are going to increase a lot if we can improve the performance of GetObservation requests.
If I execute a GetObservation request via the test client, I get this message:
"A script on this page may be busy, or it may have stopped responding. You can stop the script now, open the script in the debugger, or let the script continue."
I managed to keep the browser from crashing, but it is still slow, I think: it needs more than 2 minutes for one procedure with 3 offerings and 8 observable properties. Is that normal, or do we have a problem, in your opinion?
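Two minutes for one procedure with 3 offerings and 8 observable properties suggests the response itself is very large, and the JavaScript test client then struggles to render it. A first check is to narrow the request, for example with a temporal filter. A hedged KVP sketch, where the host, path, and identifiers are placeholders rather than your actual values:

```
http://<host>:8080/52n-sos-webapp/sos/kvp?service=SOS&version=2.0.0
    &request=GetObservation
    &offering=<one-offering-id>
    &observedProperty=<one-property-id>
    &temporalFilter=om:phenomenonTime,2014-01-01T00:00:00Z/2014-01-07T23:59:59Z
```

If a one-week window returns quickly, the bottleneck is the response size rather than the database.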
A select query in pgAdmin over the complete observation table (now about 3 million records) takes 5 minutes.
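Much of those 5 minutes is likely pgAdmin fetching and rendering 3 million rows, not the query itself. Paging with LIMIT, and indexing the columns you filter on, usually helps. A sketch in SQL, where the column names are assumptions based on a typical 52N SOS 4.x schema, so check yours first (e.g. with `\d observation` in psql):

```sql
-- Page through the table instead of loading it all at once:
SELECT *
FROM observation
ORDER BY observationid
LIMIT 500;

-- If WHERE clauses on time are slow, index the time column
-- (skip this if the SOS schema already created such an index):
CREATE INDEX obs_phen_time_start_idx
    ON observation (phenomenontimestart);

-- Verify that the planner actually uses the index:
EXPLAIN
SELECT count(*)
FROM observation
WHERE phenomenontimestart >= timestamp '2014-01-01';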
What do you think?
Thank you very much.