1. I get an error message when I try to sync: “Error 17: Authentication Failed, please contact the support”
You are trying to sync an instance that was restored on a new machine. Such an instance must be validated on the sync server side: contact the support team so they can update your identifier on the sync server.
2. Why do OpenERP processes still use a lot of memory even when nobody is working in Unifield?
Unifield is written in Python. Python manages memory as a pool (a reserve of memory): when Unifield needs memory, Python reserves what is needed, and at the end of the process, Python keeps that reserve for the next process.
Here is an example: if you confirm a PO of 100 lines, you may need 200 MB of memory. If Python already holds a reserve of 200 MB, nothing changes in the memory used. If you then confirm a PO of 300 lines, you may need 600 MB, so Python reserves 400 MB more and keeps this reserve afterwards. Now if you confirm another PO of 100 lines, Python already has enough memory and the memory usage does not change. There is a maximum amount that Python can reserve, defined in a conf file; normally it is about 80% of the total memory.
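The worked example above can be sketched as a toy model. This is an illustration of the “reserve that never shrinks” behaviour only, not Python's actual allocator; the class name and the 800 MB ceiling are hypothetical.

```python
class MemoryPool:
    """Toy model of a memory reserve that grows but never shrinks (illustration only)."""

    def __init__(self, limit_mb):
        self.limit_mb = limit_mb   # ceiling, like the ~80% limit in the conf file
        self.reserved_mb = 0       # memory held from the operating system
        self.in_use_mb = 0         # memory the current task actually needs

    def run_task(self, needed_mb):
        if needed_mb > self.limit_mb:
            raise MemoryError("request exceeds the configured ceiling")
        # Grow the reserve only if the current one is too small.
        if needed_mb > self.reserved_mb:
            self.reserved_mb = needed_mb
        self.in_use_mb = needed_mb

    def finish_task(self):
        # The task ends, but the reserve is kept for the next task.
        self.in_use_mb = 0


pool = MemoryPool(limit_mb=800)
pool.run_task(200)    # PO of 100 lines: reserve grows to 200 MB
pool.finish_task()
pool.run_task(600)    # PO of 300 lines: reserve grows by 400 MB more
pool.finish_task()
pool.run_task(200)    # PO of 100 lines again: the reserve is already big enough
print(pool.reserved_mb)   # 600 -- the reserve does not shrink back
```

Running the sequence from the example leaves the reserve at 600 MB even though the last task only needed 200 MB, which is why the process still appears to use a lot of memory when idle.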
3. How to tune for performance if you use an SSD drive?
If your computer uses an SSD drive, you can follow the procedure “Tuning PostgreSQL Server performance on SSD drive.pdf” to tune PostgreSQL performance. You will find it in the ownCloud section of the UF IT system documentation here (procedure provided by OCB).
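For a rough idea of what such tuning involves (the authoritative steps and values are in the PDF above, so treat these lines as illustrative only), SSD tuning typically adjusts PostgreSQL cost and I/O settings in postgresql.conf:

```
# postgresql.conf -- illustrative values only; follow the OCB procedure for the real ones
random_page_cost = 1.1          # on an SSD, random reads cost almost the same as sequential reads
effective_io_concurrency = 200  # SSDs can service many concurrent I/O requests
```

Lowering random_page_cost tells the query planner that index scans are cheap on SSDs; both parameters exist in standard PostgreSQL, but the values shown here are generic community recommendations, not necessarily those in the OCB procedure.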
4. Why are there so many postgres.exe processes in the Task Manager?
For PostgreSQL it is similar to what Python does for memory. PostgreSQL uses a pool of connections: when Unifield needs something from the database, it uses an existing connection if one is free; if not, it creates a new one. So even when nobody is working with Unifield, you will see several ‘postgres.exe’ processes in the Task Manager.
There is also a maximum number of processes defined in the conf file; for Unifield it is 100.
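The reuse-or-create behaviour described above can be sketched as a toy connection pool. This is not PostgreSQL's or Unifield's actual code; the class and method names are hypothetical, and only the limit of 100 comes from the text.

```python
class ConnectionPool:
    """Toy sketch of a pool that reuses idle connections (illustration only)."""

    def __init__(self, max_connections=100):
        self.max_connections = max_connections  # the limit set in the conf file
        self.free = []   # idle connections, each backed by a postgres.exe process
        self.busy = []   # connections currently in use

    def acquire(self):
        if self.free:
            conn = self.free.pop()          # reuse an existing free connection
        elif len(self.free) + len(self.busy) < self.max_connections:
            conn = object()                 # no free one: create a new connection
        else:
            raise RuntimeError("maximum number of connections reached")
        self.busy.append(conn)
        return conn

    def release(self, conn):
        # The connection is not destroyed: it returns to the idle list, which is
        # why its postgres.exe process stays visible even when nobody is working.
        self.busy.remove(conn)
        self.free.append(conn)


pool = ConnectionPool(max_connections=100)
first = pool.acquire()
pool.release(first)
second = pool.acquire()   # the same connection is handed out again
```

Because released connections go back to the idle list instead of being closed, the number of postgres.exe processes only ever grows toward the configured maximum.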
This is an extract from an FAQ on PostgreSQL:
Why does PostgreSQL have so many processes, even when idle?
As noted in the answer above, PostgreSQL is process based, so it starts one postgres (or postgres.exe on Windows) instance per connection. The postmaster (which accepts connections and starts new postgres instances for them) is always running. In addition, PostgreSQL generally has one or more “helper” processes like the stats collector, background writer, autovacuum daemon, walsender, etc, all of which show up as “postgres” instances in most system monitoring tools. Despite the number of processes, they actually use very little in the way of real resources. See the next answer.
5. Why does PostgreSQL use so much memory?
Despite appearances, this is absolutely normal, and there’s actually nowhere near as much memory being used as tools like top or the Windows process monitor say PostgreSQL is using.
Tools like top and the Windows process monitor may show many postgres instances (see above), each of which appears to use a huge amount of memory. Often, when added up, the amount the postgres instances use is many times the amount of memory actually installed in the computer!
This is a consequence of how these tools report memory use. They generally don’t understand shared memory very well, and show it as if it was memory used individually and exclusively by each postgres instance. PostgreSQL uses a big chunk of shared memory to communicate between its backends and cache data. Because these tools count that shared memory block once per postgres instance instead of counting it once for all postgres instances, they massively over-estimate how much memory PostgreSQL is using.
Furthermore, many versions of these tools don’t report the entire shared memory block as being used by an individual instance immediately when it starts, but rather count the number of shared pages it has touched since starting. Over the lifetime of an instance, it will inevitably touch more and more of the shared memory until it has touched every page, so that its reported usage will gradually rise to include the entire shared memory block. This is frequently misinterpreted to be a memory leak; but it is no such thing, only a reporting artefact.