Bug 565773 - symbol scanning is very slow
Status: RESOLVED FIXED
Product: anjuta
Classification: Applications
Component: plugins: symbol-db
Version: SVN TRUNK
OS: Other Linux
Priority: Normal
Severity: normal
Target Milestone: ---
Assigned To: Massimo Cora'
QA Contact: Anjuta maintainers
Depends on: 565942 619682 623891
Blocks:
 
 
Reported: 2008-12-27 14:09 UTC by Adam Dingle
Modified: 2013-04-25 07:05 UTC
See Also:
GNOME target: ---
GNOME version: ---


Attachments
naba's idea of transaction (9.33 KB, patch) - 2010-06-05 15:07 UTC, Massimo Cora'
naba's idea with transaction commit every N symbols (1.47 KB, patch) - 2010-06-06 22:28 UTC, Massimo Cora'
callgrind output (302.46 KB, application/x-compressed-tar) - 2010-06-22 23:15 UTC, Massimo Cora'
cachegrind output (266.71 KB, application/x-compressed-tar) - 2010-06-23 21:27 UTC, Massimo Cora'
callgrind output 2 (307.39 KB, application/x-compressed-tar) - 2010-07-27 12:14 UTC, Massimo Cora'
cachegrind output 2 (261.56 KB, application/x-bzip) - 2010-07-27 12:15 UTC, Massimo Cora'

Description Adam Dingle 2008-12-27 14:09:50 UTC
When I first open a project and symbol-db is active, it scans the project's source files to build a symbol database.  This process is very slow.  I've done some experiments on my machine to compare symbol-db's scanning performance with

- the previous symbol-browser module
- the command-line ctags tool
- NetBeans (a Java IDE which also supports C/C++)
- Eclipse (a Java IDE which also supports C/C++)

The times below are for scanning all symbols for the Anjuta project itself.

                   elapsed time         CPU time
symbol-db                  9:03             5:55
symbol-browser             0:10             0:08
ctags                      0:10             0:03
Eclipse                    0:28             0:26
NetBeans                   0:55             0:40

This shows that the symbol-db scanner is approximately 50 times slower than symbol-browser (which was just about as fast as ctags!), perhaps 20 times slower than Eclipse and 10 times slower than NetBeans.

Details: to measure symbol-db and symbol-browser times, I ran 'time anjuta' from the command line, opened the project, waited until symbol scanning was complete, then closed Anjuta.  For ctags, I ran 'time ctags -R anjuta'.  The Eclipse and NetBeans times are slightly more approximate: I used the GNOME System Monitor to watch the process CPU time while performing symbol scanning (NetBeans displays "Parsing..."; Eclipse displays "C/C++ Indexer...") and used a stopwatch to measure elapsed time.
Comment 1 Massimo Cora' 2008-12-28 14:19:10 UTC
(In reply to comment #0)
> When I first open a project and symbol-db is active, it scans the project's
> source files to build a symbol database.  This process is very slow.  I've done
> some experiments on my machine to compare symbol-db's scanning performance with
> 
> - the previous symbol-browser module
> - the command-line ctags tool
> - NetBeans (a Java IDE which also supports C/C++)
> - Eclipse (a Java IDE which also support C/C++)
> 
> The times below are for scanning all symbols for the Anjuta project itself.
> 
>                    elapsed time         CPU time
> symbol-db                  9:03             5:55
> symbol-browser             0:10             0:08
> ctags                      0:10             0:03
> Eclipse                    0:28             0:26
> NetBeans                   0:55             0:40
> 
> This shows that the symbol-db scanner is approximately 50 times slower than
> symbol-browser (which was just about as fast as ctags!), perhaps 20 times
> slower than Eclipse and 10 times slower than NetBeans.
> 

I'm not too concerned about those comparisons. They have parsers fixed on C/C++, and most of their code is probably hardcoded to skip the pipe/file processing that we do.
Symbol-browser was built with those intentions in mind, but we saw that such an architecture would have forced us to: 1. hack on ctags at every single release; 2. accept that language-support extensions (Python, Java, Vala, etc.) would definitely have been much harder, if not impossible.
Symbol-db was created with a different purpose. Naba himself said at the beginning that we would pay some time for the initial population, but that overall it wouldn't be so bad.
The population also runs in the background, leaving you with a usable GUI while it goes on.
Moreover: I asked the ctags developers whether they would offer a ctags library that works without files. Guess the answer... an in-memory process/data exchange would be subject to bugs, crashes, etc., so they will stick with the file-based approach.

> Details: to measure symbol-db and symbol-browser times, I ran 'time anjuta'
> from the command line, opened the project, waited until symbol scanning was
> complete, then closed Anjuta.  For ctags, I ran 'time ctags -R anjuta'.  The
> Eclipse and NetBeans times are slightly more approximate: I used the GNOME
> System Monitor to watch the process CPU time while performing symbol scanning
> (NetBeans displays "Parsing..."; Eclipse displays "C/C++ Indexer...") and used
> a stopwatch to measure elapsed time.

Well, the measurements are not entirely fair or precise (also: which processor/RAM are you using as the test machine?). When Anjuta starts it does many other things you wouldn't expect.
Please try the benchmark program located at plugins/symbol-db-/test/
to get a better understanding of the situation.
As I already said on the mailing list, the slowness could also be caused by libgda. We had many bugs (now mostly fixed) about slowness in that library.

To have a better understanding of the timings please have a look here.
http://bugzilla.gnome.org/show_bug.cgi?id=358479#c159

thanks and regards,
Massimo


Comment 2 Adam Dingle 2008-12-28 17:08:16 UTC
You asked about my test machine's processor and RAM.  I have an Intel Core 2 Duo CPU T5450 @ 1.66 GHz with 2 GB of RAM.  I'm running Ubuntu 8.10.  I doubt that matters much, though; I bet that the ratio of the speed of symbol-db to that of the other environments I tested will be similar on any modern Linux system.

You said that "When anjuta starts it does many other things you wouldn't expect."  Perhaps so, but the symbol-browser time above (0:10 total, 0:08 seconds of CPU time) is the *total* time observed for Anjuta to start up, scan all source files for the Anjuta project using symbol-browser, and exit.  Any extra time observed for symbol-db is due to symbol-db itself.

Comment 3 Massimo Cora' 2008-12-29 18:56:42 UTC
Does this get any better with rev 4501?
Comment 4 Adam Dingle 2008-12-30 03:43:36 UTC
With the latest revision (4501) there's good news and bad news.  The good news is that time to scan symbols for the Anjuta project dropped by approximately a factor of 2:

                   elapsed time         CPU time
symbol-db                  4:16             5:48

Interestingly, the total CPU time used was *greater* than elapsed time; this happened because the scanning process was utilizing both of the CPUs on my machine, as I could see in the task monitor as the scan proceeded.

The bad news is that during the symbol scan Anjuta was completely unresponsive: I couldn't use any menus or editor functions, and the window failed to paint itself for seconds at a time.
Comment 5 Massimo Cora' 2008-12-30 19:37:53 UTC
(In reply to comment #4)
> With the latest revision (4501) there's good news and bad news.  The good news
> is that time to scan symbols for the Anjuta project dropped by approximately a
> factor of 2:
> 
>                    elapsed time         CPU time
> symbol-db                  4:16             5:48
> 
> Interestingly, the total CPU time used was *greater* than elapsed time; this
> happened because the scanning process was utilizing both of the CPUs on my
> machine, as I could see in the task monitor as the scan proceeded.
> 
> The bad news is that during the symbol scan Anjuta was completely unresponsive:
> I couldn't use any menus or editor functions, and the window failed to paint
> itself for seconds at a time.
> 

OK, I probably have two pieces of good news. One is speed: with 4503 it should have improved a little more. The other is responsiveness: it's no longer blocking, at least here.
Can you please check with a full re-population of the db? (rm .anjuta_sym_db.db and make install...)

Comment 6 Adam Dingle 2008-12-31 11:51:40 UTC
Okay - I tried a recent build (4506) and got the following times:

                   elapsed time         CPU time
symbol-db                  3:33             3:28

So, yes, this is significantly better - the elapsed time dropped by about 17%.  My machine was responsive during the scan, too.

It seems to me that the file scan has gotten faster, but the final phase ("Generating inheritances...") seems as slow as ever and is now taking up a lot of time, relatively speaking.  I haven't looked at the code, but perhaps there are some optimization opportunities there.
Comment 7 Massimo Cora' 2008-12-31 14:37:42 UTC
OK, I've now added an index to the db.

Processing about 9200 rows of inheritances now takes 20 seconds here, which is much less than before, I think.

Can you please confirm this? (Repopulation needed.)

thanks

Comment 8 Adam Dingle 2008-12-31 15:01:35 UTC
Okay - I synced to revision 4512, rebuilt and scanned symbols for the Anjuta project again:

                   elapsed time         CPU time
symbol-db                  2:02             2:03

This is quite a bit better - seems like your most recent change helped a lot.
Comment 9 Massimo Cora' 2008-12-31 15:07:22 UTC
Good, I'm closing the bug then, since I think we've reached a good speed.
If you encounter other slowness problems, please tell me,

thanks.
Comment 10 Adam Dingle 2009-01-07 16:45:12 UTC
I just tried scanning the Anjuta sources in CodeLite (see codelite.org).  Here's the result:

                  elapsed time         CPU time
codelite                  0:50             0:36

Details: I built CodeLite with -O2.  In CodeLite, I chose Workspace / Create New Project, then created a custom-makefile project for Anjuta.  I then right-clicked on the Anjuta project in the Workspace view and chose "Import Files From Directory".  I unchecked "Import files without extension" and clicked OK.  I then measured the elapsed and CPU time while the window "Building tags database..." was active.

We've made impressive improvements to Anjuta's symbol scanning speed, but I disagree that we've reached "a good speed", since we're still more than twice as slow as CodeLite, which also uses SQLite to store its symbol database and which has symbol capabilities at least as powerful as Anjuta's.  We can leave this bug closed if you like, but I think we should continue to optimize here.
Comment 11 Massimo Cora' 2009-01-07 17:49:12 UTC
(In reply to comment #10)
> I just tried scanning the Anjuta sources in CodeLite (see codelite.org). 
> Here's the result:
> 
>                   elapsed time         CPU time
> codelite                  0:50             0:36
> 
> Details: I built CodeLite with -O2.  In CodeLite, I chose Workspace / Create
> New Project, then created a custom-makefile project for Anjuta.  I then
> right-clicked on the Anjuta project in the Workspace view and chose "Import
> Files From Directory".  I unchecked "Import files without extension" and
> clicked OK.  I then measured the elapsed and CPU time while the window
> "Building tags database..." was active.
> 

Did you also try benchmarking Anjuta compiled without maintainer mode and without debugging?
Libgda should be compiled the same way, without debug symbols.
How many files did you import using CodeLite?
symbol-db currently imports 927 files.

> We've made impressive improvements to Anjuta's symbol scanning speed, but I
> disagree that we've reached "a good speed" since we're still more than twice as
> slow as CodeLite, which also uses SQLite to store its symbol database and which
> has symbol capabilities which at least as powerful as Anjuta's.  We can leave
> this bug closed if you like, but I think we should continue to optimize here.
> 


I had a discussion a few weeks ago with Eran Ifrah (CodeLite's author) about the population methods he adopts.
He focuses only on C/C++ tags, so he could hardcode some steps inside an indexer.
I'm reporting the discussion here in case you're interested.

[..]
<eranif>	btw, I am now designing CodeLiteIndexer
<eranif>	I finished converting ctags into so/dll
<eranif>	so might be interested in that
<PeSc|O>	oh that would be great. You mean I'll be able to use it as a library?
<eranif>	Yes
<eranif>	I tested it, it works great 5x faster then spawning ctags process ...
<eranif>	there is one catch :(
<eranif>	:
<eranif>	if ctags crashes, your application crashes as well
<eranif>	and from the heavy tests I conducted, it does tend to crash
<PeSc|O>	wh0a, 5x faster?
<eranif>	this is why I creating codelite_indexer process, which will be using ctags as library, but is still an external process
<PeSc|O>	right now on anjuta I'm using ctags as server, as you're doing
<eranif>	however, instead of communicating to ctags using stdout/stdin, it will be using named pipes
<eranif>	in addition, the indexer, will write the results to the database directly and will simply return success/failure to codelite
<PeSc|O>	but passing files to ctags process, then saving the output to a file and then reprocess it with readtags.c libary
<PeSc|O>	seems to produce too much overhead
<eranif>	oh, I am using --filter=yes
<PeSc|O>	yeah me too
<eranif>	which keeps the process up
<eranif>	the new indexer, will use my properiatery protocol over unix-domain sockets
<eranif>	however, there will be no data exchange since the indexer will parse and will save the results to the database directly instead of sending them over to the calling application
<eranif>	this will reduce IPC
<eranif>	btw, i wrote my own code for parsing ctags output as well, which return the result in form of a tree
<PeSc|O>	well I have a different db structure, so I think I'll have to hack it up a little bit
<eranif>	so I dont need to hack/use readtags.c
<eranif>	my parsing code creates the tree as you would expect it: class:innder-class:methods etc
<eranif>	TagsManager::TreeFromTags()
<PeSc|O>	does it work only for c/c++ or for every ctags-supported language?
-->	jhs_s (n=jhs@dslb-088-065-123-210.pools.arcor-ip.net) has joined #codelite
<eranif>	havnt tried it
<eranif>	but I think it should work
<PeSc|O>	ok. My primary fear is that using an hardcoded scanning with ctags would end up in maintaining it for every release they do. We already had that problem
<eranif>	nothing to worry about, they dont release that often....
<eranif>	there is always trade off of performance, if you choose to use readtags, you are bind to write to files every file you parse
<eranif>	using my method, true I am more hard-coded, but faster
<PeSc|O>	yeah I know. This was what I suggested to ctags devs before starting with the current symbol-db approach, but they dropped the hint
<eranif>	my focus with codelite is C/C++ I am not trting to build multi-lang IDE
<PeSc|O>	if they at least created a 'standard' library to read tags without using files, it would be surely a quicker approach
<eranif>	there is a FR open for this ...
<eranif>	focusing on 1 language is more easy then trying to create multi-purpose IDE
<eranif>	I am not sure, but I think that Anjuta is trying to make everyone happy
<PeSc|O>	I see. Well we're trying to do the harder way
<PeSc|O>	yeah, at least for c/c++, java and python
<PeSc|O>	and probably more other languages
<eranif>	codelite already supports java - the build system is very dyanmic and allows to disable linker stage / compiler stage and other things
<eranif>	but it is a side effect of the design not on purpose

[..]


We've chosen a multi-language and multi-purpose approach. The ctags developers don't want to release an in-memory library to speed up tag handling, which was the solution I proposed first: without it, either you hardcode/hack as Eran does, or you keep a population-time lower bound of roughly 10 ms/symbol.

There are probably a few more milliseconds to gain by using the right indexes, but I'll do further tests after the 2.26 release.


Comment 12 Adam Dingle 2009-01-07 21:44:19 UTC
> did you also try to benchmark anjuta compiling it without maintainer-mode and
> without debug?
> Libgda should be compiled in the same way without debug symbols.

I've compiled Anjuta and libgda with -g -O2.  I highly doubt that the presence of debug symbols (enabled by -g) will have a noticeable effect on performance; with gcc it's the optimization level (-O2) that makes a difference.

> How many files did you import using Codelite? 
> symbol-db imports 927 files currently.

I'm importing all C and C++ source and header files.  There are 1015 of these:

$ find . -name '*.[ch]' -o  -name '*.cc' -o -name '*.cxx' | wc
   1015    1015   37943
$

Thanks for including the discussion with Eran.  Yes, it's too bad that the ctags developers don't want to release their project as a library that we can call in-process.  If (and only if!) the communication with the external ctags process is a significant bottleneck for us, then we should put the ctags code into our own library that we can call from our process directly.  As Eran points out, ctags releases are infrequent, so it shouldn't be a big deal to merge their changes into our sources once in a while.  We could even consider publishing the ctags library as a project independent of Anjuta so that other projects (such as CodeLite) could use it too.

I disagree with Eran's proposal to put the ctags indexer in a separate process that writes to the database independently; I think that's unnecessarily complex.  If ctags crashes sometimes, then we should find the causes of those crashes and fix them.  Anjuta calls dozens of external shared libraries, and we don't put each of these libraries into a separate process just in case it crashes; ctags should be no different.
Comment 13 Massimo Cora' 2009-01-07 22:09:03 UTC
(In reply to comment #12)
> 
> Thanks for including the discussion with Eran.  Yes, it's too bad that the
> ctags developers don't want to release their project as a library that we can
> call in-process.  If (and only if!) the communication with the external ctags
> process is a significant bottleneck for us, then we should put the ctags code
> into our own library that we can call from our process directly.  As Eran
> points out, ctags releases are infrequent, so it shouldn't be a big deal to
> merge their changes into our sources once in a while.  We could even consider
> publishing the ctags library as a project independent of Anjuta so that other
> projects (such as CodeLite) could use it too.
> 

Well, that is the previous approach used by symbol-browser, and as I already pointed out in comment #1, Naba and Johannes wanted to follow a different one.
You pay a little more during population, but you gain performance when you search for and operate on symbols.
For instance, I tried to populate the Anjuta project with CodeLite as you suggested:

* It's faster than symbol-db, though not by much; more importantly, while it's populating you cannot use the GUI, which is bad, isn't it?

* I tried completion with the "->" or "." scoping; the tooltip came up after 1 to 1.5 seconds, which is really bad because I couldn't code that way.
So I don't think it scales well to large projects with a lot of symbols.

By the way, I may be wrong, and I'm not saying that CodeLite is a bad IDE; I'm just weighing the pros and cons. Also take into consideration that I'm not on a high-end PC.

Comment 14 Johannes Schmid 2009-01-08 00:07:44 UTC
The ctags process doesn't seem to be the bottleneck, so I don't think we can gain much here.
One of the bottlenecks is the GObject code in libgda, but that makes the code a lot easier to handle.
Comment 15 Massimo Cora' 2009-01-08 09:21:51 UTC
(In reply to comment #14)
> The ctags process doesn't seem to be the bottleneck, so I don't think we can
> gain much here.
> One of the bottlenecks is the GObject code in libgda, but that makes the code a
> lot easier to handle.
> 

Well, considering that we're at about 10 ms/symbol, even a few milliseconds can be a bottleneck. The flush to a temporary file plus parsing with readtags has its own cost. On top of that, imagine if we could live without AnjutaLauncher and its output parsing, which add more time to the operation. They're details, but they matter.
Comment 16 Adam Dingle 2009-01-09 15:08:17 UTC
I ran a quick profiling experiment using sysprof and I agree with Johannes: at the moment I don't think communication with ctags is a bottleneck, so I think that moving ctags in-process would probably not be a significant win now.

I looked at the symbol-db database schema.  I think that it's probably a mistake to have separate sym_type and symbol tables, since almost every symbol will have its own entry in sym_type.  At the moment, almost every time we find a symbol we have to perform two database insertions: one into sym_type and another into symbol.  We also pay the cost of linking these tables, since the symbol table needs a type_id column and an index on that column.  If we merge these tables by storing the sym_type columns in the symbol table, I suspect that this will speed up indexing significantly (and simplify our code as well, and probably make the database smaller too).

I also noticed that the symbol table has four indexes at the moment:

1. (name, file_defined_id, file_position) (a unique index)
2. (name, file_defined_id, type_id)
3. scope_id
4. type_id

Indexes 1 and 2 look suspiciously similar; perhaps we could combine them into a single index, which would further speed up symbol insertions.
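
For illustration only, a minimal sketch (plain SQLite C API rather than the libgda layer Anjuta actually uses; the table and columns are a simplified stand-in for the real tables.sql schema) of what consolidating the two near-duplicate indexes could look like: index 2 is dropped, since lookups on its (name, file_defined_id) prefix can be served by the unique index, while scope_id and type_id keep their single-column indexes.

/* index_sketch.c - hypothetical illustration; the real symbol table is
 * defined in tables.sql and accessed through libgda, not like this. */
#include <stdio.h>
#include <sqlite3.h>

int main (void)
{
    sqlite3 *db = NULL;
    char *err = NULL;

    if (sqlite3_open ("index-sketch.db", &db) != SQLITE_OK)
        return 1;

    const char *ddl =
        /* Simplified stand-in for the symbol table discussed above. */
        "CREATE TABLE IF NOT EXISTS symbol ("
        "  symbol_id INTEGER PRIMARY KEY AUTOINCREMENT,"
        "  name TEXT, file_defined_id INTEGER, file_position INTEGER,"
        "  type_id INTEGER, scope_id INTEGER);"
        /* Index 1 stays: it also serves lookups on the (name,
         * file_defined_id) prefix that index 2 used to cover. */
        "CREATE UNIQUE INDEX IF NOT EXISTS symbol_unique_idx"
        "  ON symbol (name, file_defined_id, file_position);"
        /* Single-column indexes 3 and 4 are kept as-is. */
        "CREATE INDEX IF NOT EXISTS symbol_scope_idx ON symbol (scope_id);"
        "CREATE INDEX IF NOT EXISTS symbol_type_idx  ON symbol (type_id);";

    if (sqlite3_exec (db, ddl, NULL, NULL, &err) != SQLITE_OK) {
        fprintf (stderr, "DDL failed: %s\n", err);
        sqlite3_free (err);
    }
    sqlite3_close (db);
    return 0;
}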
Comment 17 Adam Dingle 2009-01-09 18:36:59 UTC
I'd like to reopen this bug, because I've become convinced we can still make significant performance gains here, perhaps even doubling our existing scanning speed.  And I think that's well worth doing.  OK with you guys to reopen?
Comment 18 Massimo Cora' 2009-01-09 18:41:10 UTC
Well, merging sym_type into symbol is not a good idea.
If you do that for sym_type, you should also do it for all the other sym_* tables, which goes against a relational db model.
Keeping sym_type separate from the other tables helps the symbol-completion task, with queries aimed only at that table.
Moreover, indexes are really fast and shouldn't be a bottleneck either, so merging more tables together won't gain that much.

The real problem with population, and the part we should try to improve, is the continuous switching between a select query and an insert (see the sketch below). I think I have the right solution for that, but it means some work on the engine core.
According to http://www.sqlite.org/cvstrac/wiki?p=SpeedComparison, inserting many rows of a table inside a transaction is quite cheap; a cache mechanism is needed to proxy the sym_type (or scope) queries, but I have to look deeper into it.
I'll take on that task after 2.26, because my available time has really decreased and it's not two days of work.

I'm reopening the bug with low priority.
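
For illustration, a compressed sketch of that select-then-insert pattern, loosely modeled on sdb_engine_get_tuple_id_by_unique () but written against the raw SQLite C API (the real code goes through libgda, and the column names here are assumed, not taken from tables.sql):

/* get_or_insert_type() - hypothetical sketch of the select-then-insert
 * pattern: every new symbol triggers a SELECT, and only on a miss an
 * INSERT, so the statement type keeps alternating during population. */
#include <sqlite3.h>

static sqlite3_int64
get_or_insert_type (sqlite3 *db, const char *type_name, const char *kind)
{
    sqlite3_stmt *sel = NULL, *ins = NULL;
    sqlite3_int64 id = -1;

    sqlite3_prepare_v2 (db,
        "SELECT type_id FROM sym_type WHERE type_type = ?1 AND type_name = ?2",
        -1, &sel, NULL);
    sqlite3_bind_text (sel, 1, kind, -1, SQLITE_TRANSIENT);
    sqlite3_bind_text (sel, 2, type_name, -1, SQLITE_TRANSIENT);

    if (sqlite3_step (sel) == SQLITE_ROW) {
        /* Hit: the type already exists, reuse its id. */
        id = sqlite3_column_int64 (sel, 0);
    } else {
        /* Miss: insert it and take the generated id. */
        sqlite3_prepare_v2 (db,
            "INSERT INTO sym_type (type_type, type_name) VALUES (?1, ?2)",
            -1, &ins, NULL);
        sqlite3_bind_text (ins, 1, kind, -1, SQLITE_TRANSIENT);
        sqlite3_bind_text (ins, 2, type_name, -1, SQLITE_TRANSIENT);
        if (sqlite3_step (ins) == SQLITE_DONE)
            id = sqlite3_last_insert_rowid (db);
        sqlite3_finalize (ins);
    }
    sqlite3_finalize (sel);
    return id;
}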

Comment 19 Naba Kumar 2010-04-11 15:19:09 UTC
Just stumbled across an old bug. I don't think ctags is the bottleneck here (Anjuta 1.x also used the ctags parsers and was very fast, but without a database).

Massimo, if the inserts are slow, one thing you could do is delete all the indexes at the start and add them back once the project scan is over. That could significantly speed up population (see the sketch below). Also, your idea of batched inserts (covering several files) could reduce some data-transfer overhead too. And, out of curiosity, you do use transaction brackets to reduce syncing overhead, don't you?
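
A minimal sketch of those two suggestions (drop the secondary indexes, bracket the bulk load in a transaction), assuming direct SQLite access and illustrative index/table names rather than Anjuta's real libgda-based engine:

/* Hypothetical sketch only; index and table names are illustrative. */
#include <sqlite3.h>

static int
populate_bracketed (sqlite3 *db, const char **insert_stmts, int n_stmts)
{
    char *err = NULL;
    int i;

    /* 1. Secondary indexes are not needed while bulk-loading. */
    sqlite3_exec (db, "DROP INDEX IF EXISTS symbol_scope_idx;", NULL, NULL, NULL);

    /* 2. One transaction around the whole batch avoids an implicit
     *    transaction (and a disk sync) per INSERT. */
    sqlite3_exec (db, "BEGIN TRANSACTION;", NULL, NULL, NULL);
    for (i = 0; i < n_stmts; i++) {
        if (sqlite3_exec (db, insert_stmts[i], NULL, NULL, &err) != SQLITE_OK) {
            sqlite3_free (err);   /* e.g. a uniqueness violation: skip row */
            err = NULL;
        }
    }
    sqlite3_exec (db, "COMMIT;", NULL, NULL, NULL);

    /* 3. Rebuild the index once, after the data is in. */
    return sqlite3_exec (db,
        "CREATE INDEX IF NOT EXISTS symbol_scope_idx ON symbol (scope_id);",
        NULL, NULL, NULL);
}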
Comment 20 Massimo Cora' 2010-04-11 22:38:03 UTC
(In reply to comment #19)
> Just stumbling across an old bug. I don't think ctags is the bottleneck here
> (anjuta 1.x used ctags parsers also and it was very fast, but without
> database).
> 
> Massio, if the inserts are slow, one thing you can possibly do is delete all
> indexes on start and add them after project scan is over. That could
> significantly increase population time. 

Well, yes and no. During population I constantly check whether symbols already exist, or whether their kinds or types are already defined.
See for instance the various sdb_engine_get_tuple_id_by_unique ().
Dropping the indexes at startup would mean full table scans on nearly every query.

Also, your idea of batch (of several
> files) inserts could reduce some data transfer overheads too. 

Yes. I'm constantly thinking about it, and I'll implement it as soon as the dust settles on the core.
I was thinking of implementing a sort of in-memory cache layer (hash tables), where symbols are pre-inserted and where lookups are faster than the ones performed on the db.
They'll be flushed to the db once they reach a certain amount. The flush will be done in batch mode.
Hopefully this will push the time down.

Also, out of
> curiosity, you use transaction brackets to reduce sycing overheads, don't you?

Nope, I don't use transactions.
There was a time when I thought they could improve things, but I was wrong.
The db is set up with
PRAGMA synchronous = OFF;
which adds another cache level for the symbols and avoids the forced synchronization with disk (see the sketch below).

To gain still more speed I'm planning to rework some mutexes; for instance, the ones in symbol-db-engine-queries.c should be removed in favor of a producer/consumer on the memory pool of GValues.
Currently the core inserts symbols serially. I have to see whether the mutexes can be removed there too; I suppose that's more difficult.
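
For reference, a tiny sketch of that PRAGMA on a raw SQLite connection (the journal_mode line is an additional assumption, not something symbol-db is stated to use):

#include <sqlite3.h>

static void
tune_connection (sqlite3 *db)
{
    /* No forced fsync() with the disk after each write: trades
     * durability on power loss for speed; it is orthogonal to (and
     * weaker than) wrapping the inserts in an explicit transaction. */
    sqlite3_exec (db, "PRAGMA synchronous = OFF;", NULL, NULL, NULL);
    /* Optional assumption: keep the rollback journal in memory too. */
    sqlite3_exec (db, "PRAGMA journal_mode = MEMORY;", NULL, NULL, NULL);
}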
Comment 21 Naba Kumar 2010-04-12 23:53:38 UTC
(In reply to comment #20)
> 
> well, yes and no. While we're on population I constantly scan for symbols if
> they already exist or if there already are kinds or types defined.
> See for instance the various sdb_engine_get_tuple_id_by_unique ().
> Dropping indexes at startup would mean full table scans at nearly every query.
> 
Perhaps, then, you can keep those indexes while removing the ones not used during population? That could be some saving. Even the triggers could be disabled and their effects re-created at the end of the scan.

> Also, your idea of batch (of several
> > files) inserts could reduce some data transfer overheads too. 
> 
> yes. I'm constantly thinking about it and I'll implement it as soon as the dust
> on the core settles down a bit.

Good.

> I was thinking about implementing a sort of cache layer in memory (hash
> tables), where the symbols are pre-inserted and where they can have faster
> lookups than the ones performed on db.
> They'll be flushed on db when they reach some amount. The flush will be done as
> a batch mode.
> This hopefully would push the time limit down.
> 
That's exactly what transaction brackets do, so I would not recommend re-inventing the wheel. That will make your code unnecessarily complex and hard to manage. A sqlite DB is not necessarily any slower than C code (it is, after all, C code itself, with the DB potentially manipulated in memory, pretty much like your hash table). I mean, an indexed lookup is not noticeably slower than a lookup in native C code. The trouble comes from letting it do a lot of overhead work (like updating/querying a dozen tables in a single insert, etc.). If you can think of ways to implement it faster in native C data structures, I am pretty sure the same can be achieved in the DB (even if that means using temporary tables).

> Nope, I don't use transactions.
> There was a time when I thought they could improve the thing but I was wrong.
> The db is set with 
> PRAGMA synchronous = OFF;
> that adds another cache level to the symbols and avoid the forced
> synchronization with disk.
> 
I thought transactions have the advantage that the data is not committed to the DB individually (wherever that is, memory or disk), which means they can avoid the commit overhead per insert and possibly optimize bulk processing. But if you have measured it and didn't find any difference, I guess there is not much point in trying it.

> To add still more speed I'm planning to rework some mutexes, for instance the
> ones on symbol-db-engine-queries.c should be removed, preferring a
> producer/consumer on the memory pool of GValues.

I actually forgot to mention it to you, but you should be using g_slice, which does pretty much the same pooling you are doing (it's practically meant for such things). Also, if that's the only reason you are using mutexes, they can go away as well. And of course it will simplify symbol-db-engine-queries.* a lot.
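
A small sketch of what the g_slice approach could look like for GValue pooling; the function names are hypothetical, not existing symbol-db API:

#include <glib-object.h>

static GValue *
value_new_string (const gchar *str)
{
    /* g_slice_new0 returns a zero-filled GValue, which is what
     * g_value_init() expects. */
    GValue *value = g_slice_new0 (GValue);
    g_value_init (value, G_TYPE_STRING);
    g_value_set_string (value, str);
    return value;
}

static void
value_free (GValue *value)
{
    g_value_unset (value);
    g_slice_free (GValue, value);
}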
Comment 22 Massimo Cora' 2010-04-20 22:24:21 UTC
(In reply to comment #21)
> Perhaps, then you can leave those indexes while remove the onces not used
> during population? It could be some saving. Even the triggers could be disabled
> and their effects re-created at the end of scan.

Well, the triggers only act at deletion time, from the symbol table to the file table. During population the engine doesn't delete anything.

> 
> > I was thinking about implementing a sort of cache layer in memory (hash
> > tables), where the symbols are pre-inserted and where they can have faster
> > lookups than the ones performed on db.
> > They'll be flushed on db when they reach some amount. The flush will be done as
> > a batch mode.
> > This hopefully would push the time limit down.
> > 
> That's exactly what transaction brackets do, so I would not recommend
> re-inventing the wheel. That will make your code unnecessarily complex and hard
> to manage. sqlite DB is not necessarily any slower than C code (it's after all
> C code itself with DB manipulated potentially in memory, pretty much like your
> hash-table). I mean, an indexed lookup is no noticibly slower than lookup in C
> native code. The trouble comes from letting it to do lot of overheads (like
> updating/quering dozen tables in single insert etc.). If you can think of ways
> to implement it in C native data structures in faster way, I am pretty sure the
> same can be achieved in DB (even if you mean to use temporary tables).
> 

Well, there's libgda between the engine and SQLite, which slows things down a bit.
Now that I remember: transactions must be committed before their data can be queried, so using transaction brackets like 'BEGIN TRANSACTION' and 'COMMIT' doesn't fit our case here.
That's why I'd like to have a queryable 'table' before the insertion into the db.
At http://sqlite.org/faq.html, point 19, they say that a single insert (as I'm doing now) is a single transaction, which is slow.
With PRAGMA synchronous=OFF it's a little faster, but we can do more.
I'd like to give my idea a try. Maybe it won't work or we won't gain speed, but if, as they say, SQLite can do 50k insertions per second, then that's blazing speed.

> > 
> I thought the transaction has the advantage that the data is not committed to
> DB individually (wherever that is - memory or disk), which means it can avoid
> commit overheads per insert and possibly optimize bulk processing. But if you
> have measured it and didn't find any difference, I guess there is not much
> point to try it.

Nope; at least on Oracle (which I use at work), you can't see transaction data until you've committed it. And I'm pretty sure SQLite behaves the same way.

> 
> > To add still more speed I'm planning to rework some mutexes, for instance the
> > ones on symbol-db-engine-queries.c should be removed, preferring a
> > producer/consumer on the memory pool of GValues.
> 
> I actually forgot to mention it to you, but you should be using g_slice which
> does pretty much the same pooling you are doing (it's practically meant for
> such things). Also, if that's the only reason you are using mutexes, that will
> also go away. And, of course it will simplify symbol-db-engine-queries.* a lot.

Yeah, good suggestion.
I used mutexes for two reasons:
1. Only one symbol must be inserted at a time. It's the only way I found back then to ensure that a symbol was correctly inserted into the db.
2. Strictly connected with 1: at that time libgda supported only one thread (the main thread). Fortunately that's no longer a problem, and the work can be parallelized now that libgda is thread-safe.
Comment 23 Massimo Cora' 2010-05-15 17:24:15 UTC
First test results:

I separated the INSERT INTO statements and grouped them into different files, one per table,
e.g.:

INSERT INTO file (file_path, prj_id, lang_id, analyse_time) VALUES ('/home/pescio/gitroot/anjuta/plugins/search/search-replace_backend.c', 1, 1, datetime ('now', 'localtime'));
INSERT INTO file (file_path, prj_id, lang_id, analyse_time) VALUES ('/home/pescio/gitroot/anjuta/plugins/search/search-replace.h', 1, 1, datetime ('now', 'localtime'));
[...]

Then from the command line I ran:

# create db
sqlite3 foo < tables.sql

# insert data (for file table).
sqlite3 foo < sql_file.log


#########################
timings:

$ time sqlite3 foo < tables.sql 
off

real	0m1.653s
user	0m0.009s
sys	0m0.007s


$ time sqlite3 foo < sql_file.log

real	1m12.119s
user	0m0.121s
sys	0m0.299s


After adding BEGIN TRANSACTION and COMMIT before and after all the statements, the timing
drops to a sensational:

$ time sqlite3 foo < sql_file.log

real	0m0.144s
user	0m0.034s
sys	0m0.004s



I tried adding the BEGIN TRANSACTION and COMMIT statements right after the scan-begin and scan-end signals in the core, but the population slows down at around the 100th file (the population test was Anjuta's project).

The best thing I can think of now is to group the insert statements in some way and run them all at once inside a transaction.
Comment 24 Massimo Cora' 2010-05-15 17:30:51 UTC
using all the insert statements at once:

$ wc -l sql.log 
79611 sql.log

that's a huge number, but look at this:

$ time sqlite3 foo < sql.log
Error: near line 61011: columns name, file_defined_id, file_position are not unique

real	0m3.121s
user	0m2.763s
sys	0m0.089s


just 3 seconds!!
Comment 25 Johannes Schmid 2010-05-16 10:24:32 UTC
Hmm, I'm not sure I'm getting this completely right.

1) Are you sure the Error in your last test isn't affecting your results?

2) Maybe it would be clever to group all the files of one directory into one transaction.
Comment 26 Massimo Cora' 2010-05-17 18:09:56 UTC
(In reply to comment #25)
> Hmm, I am not sure to get this completely right.
> 
> 1) Are you sure the Error in your last test isn't affecting your results?
> 

Yes, it doesn't affect the results. It's just an error for one record that wasn't inserted because of the uniqueness constraint.

> 2) Maybe it would be clever to group all the files of one directory into one
> transaction.

Yes, well, these are just tests to stress SQLite. What you insert isn't that important; the number of records and the time taken are.
Comment 27 Massimo Cora' 2010-05-19 22:43:15 UTC
I'm running some tests with the SQLite in-memory db: http://www.sqlite.org/inmemorydb.html

Population of Anjuta's db is slower with the in-memory db! This is absurd...

Some hints on query performance here:

http://katastrophos.net/andre/blog/2007/01/04/sqlite-performance-tuning-and-optimization-on-embedded-systems/
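
For completeness, opening such an in-memory database with the raw SQLite API looks like this (sketch only; Anjuta goes through libgda, and each ":memory:" connection gets its own private database). The per-insert work outside SQLite, such as query preparation in the upper layers, is unchanged, which may be why it did not help here:

#include <sqlite3.h>

static sqlite3 *
open_memory_db (void)
{
    sqlite3 *db = NULL;
    /* ":memory:" keeps the whole database in RAM for this connection. */
    if (sqlite3_open (":memory:", &db) != SQLITE_OK) {
        sqlite3_close (db);
        return NULL;
    }
    return db;
}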
Comment 28 Naba Kumar 2010-05-20 22:27:38 UTC
(In reply to comment #27)
> 
> Some hints here on query performances:
> 
> http://katastrophos.net/andre/blog/2007/01/04/sqlite-performance-tuning-and-optimization-on-embedded-systems/

I think query performance in symbol-db is currently quite livable. We can optimize it further, but I don't think there is any hurry, so improving it opportunistically is fine.

The biggest problem now is population performance. So you need to focus on bringing in transactions, one way or the other. It would be nice if you told us what your plan is.

(In reply to comment #27)
> I'm performing some tests with sqlite in-memory db
> http://www.sqlite.org/inmemorydb.html
> 
> population of anjuta's db is slower with in-memory db! This is absurd...
> 
It's not very useful if you intend to keep the whole db in there. It may be most useful for temporary tables to pre-process intermediate data. As for the speed, I would hazard a guess that transactions still apply to them.
Comment 29 Naba Kumar 2010-05-20 22:53:33 UTC
(In reply to comment #23)
> 
> I tried to add the begin transaction and commit statements just after
> scan-begin and scan-end signals on the core, but the population slows at around
> the 100th file (population test was Anjuta's project).
> 
I don't think it is nice to have transactions that big. It might have adverse effects, depending on the caching mechanism and journaling, actually resulting in diminishing returns after a while (that may be what you are noticing). Besides, we lose persistence for a long while.

Transactions mostly avoid disc rotations and journal file access, so after your data reaches a certain size their benefit becomes insignificant. For example, once you collect 2 seconds' worth of data to write to disc, the 1/60 second of additional disc rotation to complete the transaction (from the sqlite FAQ) is <1% of overhead. If you try to save that 1% further, you are trying to save 1 second out of a 100-second population time :), at the risk of hitting other problems. I think you need to find a sweet spot. For example, first try transactions around each file, or around each directory as Johannes suggested.
Comment 30 Massimo Cora' 2010-05-22 22:47:39 UTC
My last comments were just a diary of notes on the tests I ran.

Now I've got a cleaner vision of the big picture.
My plan would be:

1. remove __tmp_heritage_scope table.
2. optimize the biggest tables as sym_type and scope. Maybe symbol too.


explanations:
1. Right now I'm populating __tmp_heritage_scope with data collected while scanning the various symbols. That table is just a temporary table, i.e. data is parked there and, once the scanning is finished, it is re-fetched and parsed on the C side.
It was implemented like that because we wanted to be able to restart the scanning from where it stopped (if Anjuta was closed while scanning).
I no longer agree with that vision. I don't see why a user would close Anjuta if symbol scanning were *really* fast: they would just wait a few more seconds for the scan to finish. Right now the scan isn't 'fast' but merely 'normal' because we take all those precautions. Does the user want to close Anjuta anyway? OK, the initial population won't be completed and next time it will restart from zero. This also avoids breaking some inheritance references between tables.
In my tests we go from an average of 0.0025 sec/symbol with the current method to 0.0023 sec/symbol. The gain is much more visible if the project has many C++ classes.
So here is my plan: replace the "process --> C --> db --> C --> process" flow with a "process --> C --> process" flow. Keeping the __tmp_heritage data in a queue would make things faster.

2. Right now I'm using an 'insert first, check later' pattern to push data into the sym_type and scope tables. Since the probability that the same type or scope is already present in the table is low (= few collisions), this is the best way to go.
And this works for a normal scan (i.e. the user has a populated db and is just editing files, etc.): it makes no difference to them whether a scan takes 10 ms or 20 ms.
But it is a performance problem on the very first population: all those queries to check whether a type or scope already exists in the table are *slower* than checking an in-memory hash table.
So this is my plan: map the table in memory (a hash table for the checks and a GList for serial insertion; see the sketch below). There's one detail to be careful about: sym_type ids and scope ids are set to autoincrement, so the tuples must be inserted in the correct serial order (which is why we'd use a GList).
I don't have timing data for this second step yet, but I feel this is the right way.
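
A hypothetical sketch of that tablemap idea in GLib plus raw SQLite terms (names such as TypeMap are invented here, and the real engine works through libgda and tables.sql): a hash table answers the existence checks in memory, a queue preserves insertion order so the in-memory ids follow the expected autoincrement sequence, and everything is flushed in a single transaction at the end of the first scan.

#include <glib.h>
#include <sqlite3.h>

typedef struct {
    GHashTable *ids;     /* "kind|name" -> assigned id (as pointer)   */
    GQueue     *rows;    /* "kind|name" strings, in insertion order   */
    gint        next_id; /* mirrors the expected autoincrement value  */
} TypeMap;

static TypeMap *
type_map_new (void)
{
    TypeMap *map = g_new0 (TypeMap, 1);
    map->ids = g_hash_table_new_full (g_str_hash, g_str_equal, g_free, NULL);
    map->rows = g_queue_new ();
    map->next_id = 1;
    return map;
}

/* Return the id for (kind, name), inserting into the map on a miss. */
static gint
type_map_get_or_add (TypeMap *map, const gchar *kind, const gchar *name)
{
    gchar *key = g_strdup_printf ("%s|%s", kind, name);
    gpointer val;

    if (g_hash_table_lookup_extended (map->ids, key, NULL, &val)) {
        g_free (key);
        return GPOINTER_TO_INT (val);
    }
    g_hash_table_insert (map->ids, key, GINT_TO_POINTER (map->next_id));
    g_queue_push_tail (map->rows, key);   /* key stays owned by the hash table */
    return map->next_id++;
}

/* Flush everything to the db in one transaction. */
static void
type_map_flush (TypeMap *map, sqlite3 *db)
{
    GList *l;
    sqlite3_stmt *ins = NULL;

    sqlite3_prepare_v2 (db,
        "INSERT INTO sym_type (type_type, type_name) VALUES (?1, ?2)",
        -1, &ins, NULL);
    sqlite3_exec (db, "BEGIN TRANSACTION;", NULL, NULL, NULL);
    for (l = map->rows->head; l != NULL; l = l->next) {
        gchar **parts = g_strsplit ((gchar *) l->data, "|", 2);
        sqlite3_bind_text (ins, 1, parts[0], -1, SQLITE_TRANSIENT);
        sqlite3_bind_text (ins, 2, parts[1], -1, SQLITE_TRANSIENT);
        sqlite3_step (ins);
        sqlite3_reset (ins);
        g_strfreev (parts);
    }
    sqlite3_exec (db, "COMMIT;", NULL, NULL, NULL);
    sqlite3_finalize (ins);
}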
Comment 31 Massimo Cora' 2010-05-22 22:52:53 UTC
Oh well, I was forgetting: step 2 would use transactions to speed things up.
A single transaction would insert thousands of symbols into sym_type or scope, etc.
The tests with the files were meant to prove this step.
sdb_engine_add_new_sym_type () would get a flag to check whether the insertion belongs to the very first population or to a 'normal' one.

To complete the analysis:
in-memory dbs are surely great, but unfortunately they don't help here. They're too slow.
Comment 32 Massimo Cora' 2010-06-02 23:07:01 UTC
http://git.gnome.org/browse/anjuta/commit/?id=63f2a14cb6fec4f0293d19beb6366c7f45e11301


Total performance gain: 56% on average symbol insertion. First population (before changes) was 0.0025 sec/sym. Now it reaches 0.0011 sec/sym. 

The sym_type and scope tables are mapped onto hash tables during the first population.
Now the population flies.
For global symbols I have to make sure it doesn't fall back to 'normal' population after the first package is scanned, but that shouldn't be difficult to do.
Comment 33 Naba Kumar 2010-06-04 21:50:26 UTC
(In reply to comment #32)
> 
> Total performance gain: 56% on average symbol insertion. First population
> (before changes) was 0.0025 sec/sym. Now it reaches 0.0011 sec/sym. 
> 
Awesome! That's a very good improvement.
Comment 34 Massimo Cora' 2010-06-04 23:53:06 UTC
(In reply to comment #33)
> (In reply to comment #32)
> > 
> > Total performance gain: 56% on average symbol insertion. First population
> > (before changes) was 0.0025 sec/sym. Now it reaches 0.0011 sec/sym. 
> > 
> Awesome! That's a very good improvement.

well, that's nothing compared to
http://git.gnome.org/browse/anjuta/commit/?id=efe42d331994f49acc2c5e7cb2496a60825db78d

you should really try it.
0.000085 sec/sym average insertion time. This beats every previous 'record'; the best so far was 0.0011 sec/sym. Populating the entire Anjuta db (~1k files, 28k symbols) takes around 10 seconds here. Most of the time is spent flushing the records to the db, but I think the GUI freeze can be avoided with a thread wrapper.

As from Malerba's suggestion:
"Another idea: you could use a GdaThreadWrapper object to execute all
the INSERTs in another thread wo you don't have to wait, see
http://library.gnome.org/devel/libgda/4.1/GdaThreadWrapper.html.
"

I think that GdaThreadWrapper can be used in queries too.
Comment 35 Naba Kumar 2010-06-05 10:08:46 UTC
(In reply to comment #34)
> well, that's nothing compared to
> http://git.gnome.org/browse/anjuta/commit/?id=efe42d331994f49acc2c5e7cb2496a60825db78d
> 
> you should really try it.
> 0.000085 sec/sym on average insertion. This kicks every previous 'record'. Best
> one was 0.0011 sec/sym. Population of entire Anjuta db (~1k files, 28k symbols)
> takes here around 10 seconds. Most of the time is spent while flushing on db
> the records, but I think that with a thread wrapper the gui freeze can be
> avoided. 
> 
That's an impressive improvement. Good to see that you can bring it down so low.

However, I am worried about the maintenance overhead brought in by the hash tables. I still maintain my position that you don't really need them. Using transactions is sufficient to bring down population time without involving a memory cache (which is what transactions themselves are anyway, and likely implemented more efficiently).

Looking at your patch, it seems you are using the memory cache mainly to just avoid duplicate entries. Caching itself seems pointless. Can't you use just a simple {key}=>{boolean} hash table instead for that?

So, instead of your current:

symbols => [cache] => [transcation begin] => [flush] => [transaction end]

you do:

for batch of symbols
   [transation begin] [flush batch] [transaction end]
   [update duplicate check hash]
end

Essentially, like I already suggested in comment #21. Could you please give it a try this way?
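
A concrete reading of that pseudocode, sketched in C against the raw SQLite API (hypothetical names; the duplicate-check hash holds only a boolean-style marker, and the inserts are committed every N symbols):

#include <glib.h>
#include <sqlite3.h>

#define BATCH_SIZE 500   /* "N", to be tuned experimentally */

static void
populate_in_batches (sqlite3 *db, GPtrArray *symbol_keys)
{
    GHashTable *seen = g_hash_table_new_full (g_str_hash, g_str_equal,
                                              g_free, NULL);
    sqlite3_stmt *ins = NULL;
    guint i, in_batch = 0;

    sqlite3_prepare_v2 (db, "INSERT INTO symbol (name) VALUES (?1)",
                        -1, &ins, NULL);
    sqlite3_exec (db, "BEGIN TRANSACTION;", NULL, NULL, NULL);

    for (i = 0; i < symbol_keys->len; i++) {
        const gchar *key = g_ptr_array_index (symbol_keys, i);

        /* Duplicate check stays in memory, but no row data is cached. */
        if (g_hash_table_lookup (seen, key) != NULL)
            continue;
        g_hash_table_insert (seen, g_strdup (key), GINT_TO_POINTER (1));

        sqlite3_bind_text (ins, 1, key, -1, SQLITE_TRANSIENT);
        sqlite3_step (ins);
        sqlite3_reset (ins);

        /* Commit every BATCH_SIZE symbols and start a new transaction. */
        if (++in_batch == BATCH_SIZE) {
            sqlite3_exec (db, "COMMIT; BEGIN TRANSACTION;", NULL, NULL, NULL);
            in_batch = 0;
        }
    }
    sqlite3_exec (db, "COMMIT;", NULL, NULL, NULL);
    sqlite3_finalize (ins);
    g_hash_table_destroy (seen);
}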
Comment 36 Massimo Cora' 2010-06-05 12:20:47 UTC
(In reply to comment #35)
> 
> However, I am worried about the maintenance overhead brought by the hash
> tables. I am still maintaining my position that you don't really need them.
> Using transactions are sufficient to bring them down population time without
> involving memory cache (which transactions themselves are anyways, and likely
> implemented more efficiently).
> 

Well, I don't see a problem with maintaining the hash tables. They're only used to boost the first population, and their fields map onto the fields in tables.sql.

> Looking at your patch, it seems you are using the memory cache mainly to just
> avoid duplicate entries. Caching itself seems pointless. Can't you use just a
> simple {key}=>{boolean} hash table instead for that?

That's exactly the point!
Checking for duplicate entries is terribly slow through libgda+sqlite. It will use indexes, sure, but they're still too slow because they're refreshed at every insertion.

> 
> So, instead of your current:
> 
> symbols => [cache] => [transcation begin] => [flush] => [transaction end]
> 
> you do:
> 
> for batch of symbols
>    [transation begin] [flush batch] [transaction end]
>    [update duplicate check hash]
> end
> 
> Essentially, like I already suggested in comment #21. Could you please give it
> a try this way?

I can try if you really want, but I'm sure we cannot improve performance that way.
I'll try it like this:
* keep the tablemaps on sym_type and scope. This is necessary because lookups through libgda are slow and because the probability of duplicate entries is high.
* on the first population, remove the indexes on the symbol table, start a transaction, insert everything, then commit. There shouldn't be any duplicate entries, I suppose. The symbol tablemap would then be killed.
This is the only way I can imagine your solution. Would you also like to kill the tablemaps on sym_type and scope?
Comment 37 Massimo Cora' 2010-06-05 15:07:47 UTC
Created attachment 162809 [details] [review]
naba's idea of transaction

Apply this one after commit 
http://git.gnome.org/browse/anjuta/commit/?id=efe42d331994f49acc2c5e7cb2496a60825db78d

This approach is 20 seconds slower than the symbol tablemap.
I've added a DEBUG_PRINT for the total time taken by the first population.
It reports 35.34 seconds.

With my method it instead takes 15.93 seconds, flushing included.

Anyway, can you please have a look to see whether I implemented it the way you wanted?
Thanks
Comment 38 Massimo Cora' 2010-06-05 15:32:48 UTC
Some more numbers (on my idea):

+ removing the indexes from the symbol table only shaves 0.5 seconds off the flush.
- the odd thing is that recreating them takes a while, and if they aren't created, the second processing step takes ages to complete due to full table scans.

* the hash-table check is necessary because we must be 100% sure that the ids we're flushing are really the ones calculated in the hash table, without duplicates. This is because "PRIMARY KEY AUTOINCREMENT" is expected to map them 1 to 1 when inserting.
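
A related hedged sketch: instead of assuming the AUTOINCREMENT ids line up 1:1 with a counter kept beside the hash table, the id SQLite actually assigned can be read back with sqlite3_last_insert_rowid() right after each insert (raw SQLite API; table and column names are illustrative):

#include <sqlite3.h>

static sqlite3_int64
insert_scope (sqlite3 *db, const char *scope_name)
{
    sqlite3_stmt *ins = NULL;
    sqlite3_int64 id = -1;

    sqlite3_prepare_v2 (db, "INSERT INTO scope (scope_name) VALUES (?1)",
                        -1, &ins, NULL);
    sqlite3_bind_text (ins, 1, scope_name, -1, SQLITE_TRANSIENT);
    if (sqlite3_step (ins) == SQLITE_DONE)
        id = sqlite3_last_insert_rowid (db);   /* the real id, no guessing */
    sqlite3_finalize (ins);
    return id;
}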
Comment 39 Naba Kumar 2010-06-06 06:34:38 UTC
(In reply to comment #36)
> 
> well, I don't see problems in maintaining hashtables. They're used to boost the
> first population, and the fields are mapped using the fields in tables.sql.
> 
In proper usage, there should be nothing between your application data and the database. That's the whole point of using a database. If you put your own data-structure storage in the middle, that almost defeats the point of using a database, because you end up re-inventing part of a wheel that the DB people have probably built better than you. This is indeed a problem.

If you cannot optimize your implementation just by using the DB directly, then either you don't need a database because it doesn't serve your purpose, or you are using it incorrectly. In 99% of cases it's the latter. In the remaining 1% you need a highly customized storage implementation.

> > Looking at your patch, it seems you are using the memory cache mainly to just
> > avoid duplicate entries. Caching itself seems pointless. Can't you use just a
> > simple {key}=>{boolean} hash table instead for that?
> 
> that's the point!
> The check of duplicate entries is terribly slow using libgda+sqlite. It'll use
> indexes ok, but they're anyway too slow because they'll be refreshed at every
> insertion·
> 
If that's the point, then you can use a simple {key => boolean} hash table to avoid duplicates. You can also use {key => int} if you want to track autoincrement IDs like you do now (but see my comment below about autoincrement caveats). The rest is just redundant. If there aren't that many hits, you can even just insert them and the DB will reject those symbols anyway due to the unique constraint.

> I can try if you really want, but I'm sure that we cannot improve performances.

Performance is always a trade-off with price, the price being the time it takes to optimize, ease of maintenance, code clarity, etc. The spectrum runs from a custom DB written in asm code to completely unoptimized DB usage. We have to find a sweet spot, which is usually a range rather than a point.

So don't get blinded just yet by the milliseconds or microseconds it saves you. If you introduce bad code to save that millisecond, it's often not worth it. For Anjuta, whether it takes 15s or 10s is not the ultimate call. If we get 15s with good and maintainable code, and 10s with bad code, I prefer the former.

Performance optimization happens in stages. The first stage is the so-called low-hanging fruit. Do that and see whether the result is acceptable. Then comes so-called micro-optimization, where you start playing with trade-offs between hacks and gains until the result is acceptable. In practice, you can split it into many stages, depending on your code.

In your case, the low-hanging fruit was the missing transactions. Instead of starting with that, you jumped directly to micro-optimization, where hash-table access becomes a matter of survival.

Please try to understand me. I am not trying to criticize your efforts. On the contrary, I am trying to help you get this done properly. You have done great so far, but some things can still be done better. I am in the middle of the query cleanup ATM, otherwise I would have liked to help you with it personally. Until then, all I can do is tell you stories from my armchair programming (forgive me for that :)).

> I'll try this way:
> * maintaining tablemaps on sym_type and scope. This is necessary because lookup
> using libgda is slow and because the probability of duplicate entries is high.

Let's do it in steps. So, yeah, start by getting rid of the symbol memory table first. We can deal with the above after that (or maybe together). I am sure there are other ways to solve it.

> * I'll remove on 1st population the indexes on table symbol, initialize a
> transaction, insert everything, then close. There shouldn't be duplicate
> entries I suppose. symbol tablemap would then be killed.
> This is the only way that I immagine your solution.

(In reply to comment #37)
> Created an attachment (id=162809) [details] [review]
> naba's idea of transaction
> 
> Apply this one after commit 
> http://git.gnome.org/browse/anjuta/commit/?id=efe42d331994f49acc2c5e7cb2496a60825db78d
> 
> This way is 20 seconds slower than symbol tablemap.
> I've added a DEBUG_PRINT for the total time taken for the first population.
> It says 35.34
> 
In the above patch I didn't see where you start transactions, so I assume you are making one massive transaction. That is not what I meant. Make the transactions batches of reasonable size, for example by keeping a counter of symbols added in a transaction and flushing when you reach N symbols, if nothing else, with N being experimentally optimal. Originally I thought you could bracket it around a directory, but I suppose N symbols is equally effective and less work. Also, get rid of the symbol memory hash table completely, so that we don't count its overhead either. Then let's see how it does.

> With my method instead it takes 15.93 seconds, flushing included.
> 
How long does the current implementation (before your method) take?

(In reply to comment #38)
> Some more numbers (on my idea):
> 
> + removing indexes from symbol table would only drop 0.5 seconds from flush.
> - the odd thing is that recreating them takes a while and if not created the
> second step processing'll take ages to complete, due to full scans.
> 
Let's not touch the indexes yet.

> * hashtable check is necessary because we must be 100% sure that the ids we're
> flushing are really the ones calculated on hashtable, without dups. This
> because the "PRIMARY KEY AUTOINCREMENT" would automatically map them 1 to 1
> when inserting.

I am sure I read somewhere that autoincrement ids are not guaranteed to be sequential. So keeping track of DB ids with an incremented variable is a very fragile hack: one step where it goes wrong and everything afterwards is wrong. You might want to check that.

> Would you also like to kill tablemaps on sym_type and scope?

I would like to. Is there any particular reason for sym_type and scope to exist the way they do now? The "scope" table seems to be just a way to generate persistent integer quarks. Perhaps there is a better way to do that, say via hashing, and get rid of it altogether? "sym_type" looks a bit hairy too.

In the end, if you have trouble keeping those tables synced properly, just put them all together in the symbol table. Perhaps that would make things easier, at a small cost in DB size.
Comment 40 Massimo Cora' 2010-06-06 21:53:42 UTC
(In reply to comment #39)
> In proper usage, there should be nothing between your application data and
> database. That's the whole point of using database. If you put your own data
> structure storage in the middle, that's almost beating the point of using
> database. Because otherwise you are re-inventing part of the wheel that DB guys
> have probably done better than you. This is indeed a problem.

In theory this is true, and in our case it would be even more true if we weren't using an extra layer like libgda.
If we were using pure sqlite calls only, then I would consider it madness to create our own structs just to flush them later. And, again, I would be 100% behind your vision.
But there's a but: IMHO libgda isn't fast enough, and it is too generic a layer to permit the very fast db access we need.


> 
> If you can not optimize your implementation just by using DB directly, then
> either you don't need database because it doesn't serve/fulfill your purpose,
> or you are using it incorrectly. In 99% of the cases, it's the later. In
> remaining 1% cases, you need highly customized storage implementation.

true. But it's also true that we need the initial boost just once. All the other cases are normal ones.

> > 
> If that's the point, then you can use a simple hashtable {key => boolean} to
> avoid duplicates. You can also use {key => int} if you want to track auto
> increment IDs like you do now (but see my comment below about it about
> autoincrement caveats). The rest is just bogus and redundant. If the hits are
> not that so many, you can even just insert them and DB will fail those symbols
> anyways due to unique constrain.
> 

If the insertions fail, then the id sequences will be broken, which is not what we want.

> > I can try if you really want, but I'm sure that we cannot improve performances.
> 
> Performance is always in trade-off with price. The price being time it takes to
> optimize, maintenance easiness, code clarity etc. The spectrum lies between a
> custom DB written in asm code to absolutely unoptimized DB usage. We have to
> find a sweet spot, which is usually a range rather than a point.
> 
> So don't get blinded by just the millisecs or microsecs it saves for you yet.
> If you introduce bad code to save that millisec, it's often not worth it. For
> Anjuta, whether it takes 15s or 10s is not the ultimate call. If we get 15s
> with good and maintainable code, and 10s with bad code, I prefer former.

If it's 5 seconds, OK, but it can become 20 or more (as we've seen), which IMHO is a whole different thing.
If we don't want to push that limit down, then we can just leave things as they were before the tablemap hacks: the code is (more or less) "well" written, and the db is used as you're pointing out above.

> 
> Performance optimization happens in stages. First stage is so called low
> hanging fruits. Do that and see if that's acceptable. Then enter so called
> micro optimization. There you start playing with trade-offs between hacks and
> gains. Until the result is acceptable. In practice, you can split it into many
> stages, depending on your code.
> 
> In your case, the low hanging fruit was forgetting to use transactions. Instead
> of starting with that, you jumped directly to micro-optimization where
> hashtable access becomes a matter of survival.

True, but I did that for an important reason. Transactions are fast and useful when, for instance, you do a lot of identical operations in a row.
A long run of inserts (and only inserts) will be much faster than an insert, followed by a select, followed by another insert and so on, which is what your approach would do.
At the db level, the continual switching between select and insert matters.
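
Just to be concrete, the insert-only pattern I mean looks roughly like this (a sketch against raw sqlite3 for illustration; the real code goes through libgda, and the table/column names here are invented):

#include <sqlite3.h>
#include <stdio.h>

/* Insert a batch of names inside one transaction with a single prepared
 * statement: no SELECT is interleaved between the INSERTs. */
static int
insert_batch (sqlite3 *db, const char **names, int n)
{
  sqlite3_stmt *stmt;
  int i;

  sqlite3_exec (db, "BEGIN", NULL, NULL, NULL);
  sqlite3_prepare_v2 (db, "INSERT INTO demo_symbol (name) VALUES (?)",
                      -1, &stmt, NULL);
  for (i = 0; i < n; i++)
    {
      sqlite3_bind_text (stmt, 1, names[i], -1, SQLITE_STATIC);
      if (sqlite3_step (stmt) != SQLITE_DONE)
        fprintf (stderr, "insert failed: %s\n", sqlite3_errmsg (db));
      sqlite3_reset (stmt);
      sqlite3_clear_bindings (stmt);
    }
  sqlite3_finalize (stmt);
  return sqlite3_exec (db, "COMMIT", NULL, NULL, NULL);
}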

> 
> Please try to understand me. I am not trying to criticize your efforts. On the
> contrary, I am trying to help you to get it done properly. You have done great
> so far, but there are things still can be done better. I am in the middle of
> query clean up ATM, otherwise I would have liked to help you personally with
> it. Until then, all I can do you is tell you stories with armchair programming
> (forgive give me for that :)).

I really appreciate your efforts; we're discussing whether one method is better than another, and that's great, believe me.


> 
> > I'll try this way:
> > * maintaining tablemaps on sym_type and scope. This is necessary because lookup
> > using libgda is slow and because the probability of duplicate entries is high.
> 
> Let's do it in steps. So, yeah start by getting rid of symbols memory table
> first. We can deal with the above after that (or may be together). I am sure
> there are other ways to solve it.
> 

Unfortunately I don't see any other way to avoid the switching between select and insert.
What's more, I looked at the libgda 4.1.x docs and I didn't see a way to run multiple transactions on the same connection at the same time.
That would be a problem because, with one transaction for scope, one for sym_type and one for symbol, we would need three connections to the same db:
that would again lead to complex code.

> > * I'll remove on 1st population the indexes on table symbol, initialize a
> > transaction, insert everything, then close. There shouldn't be duplicate
> > entries I suppose. symbol tablemap would then be killed.
> > This is the only way that I immagine your solution.
> 
> (In reply to comment #37)
> > Created an attachment (id=162809) [details] [review] [details] [review]
> > naba's idea of transaction
> > 
> > Apply this one after commit 
> > http://git.gnome.org/browse/anjuta/commit/?id=efe42d331994f49acc2c5e7cb2496a60825db78d
> > 
> > This way is 20 seconds slower than symbol tablemap.
> > I've added a DEBUG_PRINT for the total time taken for the first population.
> > It says 35.34
> > 
> In the above patch, I didn't see where you start transactions, so I assume you
> are making one massive transaction. 

Nope, I'm starting the transaction in the sdb_engine_add_new_symbol () method at the first symbol I process, and I commit it when all the symbols have been parsed.
I'll implement a commit every N symbols, but as I already wrote to Malerba on the gnome-db mailing list, that won't improve performance.

> That is not what I meant. Make the
> transactions in batch of reasonable size, for example by keeping a counter of
> symbols added in a transaction and flushing when you reach N symbols, if
> nothing else. N being experimentally optimal. Originally, I thought you could
> bracket it around directory, but I suppose N symbols could is equally effective
> and less work. Also, get rid of symbol memory hash table completely too, so
> that we don't count in its overhead too. Then let's see how it does.

I'll try this out too.

> 
> > With my method instead it takes 15.93 seconds, flushing included.
> > 
> How long does current implementation (before your method) it takes?

I'll let you know just after this comment.

> 
> (In reply to comment #38)
> > Some more numbers (on my idea):
> > 
> > + removing indexes from symbol table would only drop 0.5 seconds from flush.
> > - the odd thing is that recreating them takes a while and if not created the
> > second step processing'll take ages to complete, due to full scans.
> > 
> Let's not touch the indexes yet.
> 
> > * hashtable check is necessary because we must be 100% sure that the ids we're
> > flushing are really the ones calculated on hashtable, without dups. This
> > because the "PRIMARY KEY AUTOINCREMENT" would automatically map them 1 to 1
> > when inserting.
> 
> I am sure I read somewhere the autoincrements are not guaranteed to be
> sequential. So, keeping track of DB ids by a variable incremented is very
> unstable hack. one step where it's wrong and everything afterwards is wrong.
> You might want to check it out.
> 

http://www.sqlite.org/autoinc.html
note the last paragraph:
"Note that "monotonically increasing" does not imply that the ROWID always increases by exactly one. One is the usual increment. However, if an insert fails due to (for example) a uniqueness constraint, the ROWID of the failed insertion attempt might not be reused on subsequent inserts, resulting in gaps in the ROWID sequence. AUTOINCREMENT guarantees that automatically chosen ROWIDs will be increasing but not that they will be sequential."

OK, but by ensuring key uniqueness we are 100% sure that the ids will stay correctly mapped once they are in the db.

> > Would you also like to kill tablemaps on sym_type and scope?
> 
> I would like to. Is there any particular reason for sym_type and scope to exist
> the way they are now? "scope" table seems just a way to generate persistent
> integer quarks. Perhaps, there is a better way to do it, say hay hashing, and
> get rid of it all together? "sym_type" looks a bit hairy too.
> 

eheh, you'd like to rewrite everything from scratch, db logic and relations included. I don't think it's a good idea at all.

> In the end, if you have trouble getting those tables synced properly, just have
> them all together in symbol table. Perhaps that would make this thing easier,
> at a little cost of DB size.

That would make a lot of data redundant, and we'd end up maintaining one big bloated table. So why use a db at all, then?
If that's the case, then a view should be considered, but since it's a relational db I would keep the tables separate.
Comment 41 Massimo Cora' 2010-06-06 22:17:53 UTC
(In reply to comment #40)
> > How long does current implementation (before your method) it takes?
> 
> I'll let you know just after this comment.
> 

With the *1st methods removed from execution I get:

TOTAL FIRST SCAN elapsed: 56.004919, with an average insert time of 0.0018 sec/sym.

The only thing left to do is the deletion of __tmp_heritage_scope, which is a useless table.

So here we have:

56 secs: normal method
35 secs: transactions on symbol, tablemaps for sym_type and scope
15 secs: tablemaps for everything
Comment 42 Massimo Cora' 2010-06-06 22:28:33 UTC
Created attachment 162893 [details] [review]
naba's idea with transaction commit every N symbols

This patch adds a counter and commits the transaction every N items.
It doesn't move far from those 35 seconds: sometimes we get 36, sometimes 35.
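
The counter logic is basically this (sketched against raw sqlite3 instead of libgda; "stmt" stands for the prepared INSERT on the symbol table, and the names are invented for the example):

#include <sqlite3.h>

#define BATCH_SIZE 500          /* N, to be tuned experimentally */

static int batch_count = 0;     /* symbols added since the last COMMIT */

/* Per-symbol hook: open a transaction lazily, commit it every
 * BATCH_SIZE symbols and start a fresh one on the next symbol. */
static void
add_symbol_batched (sqlite3 *db, sqlite3_stmt *stmt, const char *name)
{
  if (batch_count == 0)
    sqlite3_exec (db, "BEGIN", NULL, NULL, NULL);

  sqlite3_bind_text (stmt, 1, name, -1, SQLITE_TRANSIENT);
  sqlite3_step (stmt);
  sqlite3_reset (stmt);

  if (++batch_count >= BATCH_SIZE)
    {
      sqlite3_exec (db, "COMMIT", NULL, NULL, NULL);
      batch_count = 0;
    }
}

/* At the end of the scan: flush the last partial batch. */
static void
flush_last_batch (sqlite3 *db)
{
  if (batch_count > 0)
    {
      sqlite3_exec (db, "COMMIT", NULL, NULL, NULL);
      batch_count = 0;
    }
}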
Comment 43 Massimo Cora' 2010-06-06 22:53:18 UTC
While we're on the subject: even the memory pool system shouldn't be in symbol-db, if we were writing clean and "calm" code.
But we had to add it, to avoid thousands of wild allocations/deallocations.
libgda didn't provide a fast way to manage objects like this, which is why I wrote a *_take_static patch for GdaHolder.
Personally, I think that if an approach is backed by measurable data, i.e. the transactions on symbol, and the extra gain from the hacks isn't that big, then transactions on symbol would be the best choice.
I'll keep working on a way to write clean and safe code that pushes those 35 seconds down. As I already told you, I'm a bit pessimistic, but I can try.
On the other hand I'd really like to hear other opinions, from users but also from other developers. If 35 seconds is good enough for you then OK, no problem, we'll leave it that way.

What I'd like to avoid is getting to the point where users open bugs like the ancient bug #172099, or this one, which led me to completely purge gedit from all my boxes because it was totally inefficient. The fixes weren't that difficult, but by sticking to slow, boring, by-the-book code they made users unhappy, me first of all (OK, this is just a rant against gedit, forgive me please :))
Comment 44 Naba Kumar 2010-06-07 20:30:44 UTC
Hi Massimo,

(In reply to comment #40)
> 
> in theory this is true, and in our case it would be much more true if we
> weren't using an extra layer as libgda.
> If we were using only pure sqlite calls then I would consider as a mad thing to
> create our own structs just to flush them later. And, again, I would be 100%
> with your vision.
> But there's a but: IMHO libgda isn't fast enought, and a too much generic layer
> to permit very performant accesses to db as we need.
> 
If libgda results in creating layers of complexity (value spooling, memory tables, transactionless populations etc.), then clearly it has failed to serve us well, and we should just get rid of it. It's probably much easier to port to sqlite than to keep all the added layers on top; besides, libanjuta takes care of abstracting symbols for the rest, so we don't gain much from libgda anyway. That said, let's keep the option open until we try a couple of other things.

> 
> true. But it's also true that we need the initial boost just once. All the
> other cases are normal ones.
> 
Let's try something that's also useful for other parts.

> 
> that would make a lot of data redundant, and we'll end maintaining a plain big
> bloated table. So why should we at all use a db then?
> If that's the case, then a view should be considered, but being a relational db
> I would keep tables separated.

Looking at anjuta db:

sqlite> select count(*) from symbol;
29030
sqlite> select count(*) from sym_type;
21194
sqlite> select count(*) from scope;
19817

That's almost a 1:1 mapping between the three tables, instead of the N:1 that normally justifies normalization. So whatever the reasons for keeping them as separate tables were, they don't hold anymore. I think you can safely combine them and at least gain simplicity.
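
Just to make the idea concrete, the combined table could look something like this (purely illustrative: the column names are invented here and are not the actual Anjuta schema):

#include <sqlite3.h>

/* Illustrative only: one denormalized table replacing the
 * symbol/sym_type/scope triple, so a symbol becomes a single INSERT. */
static int
create_merged_symbol_table (sqlite3 *db)
{
  return sqlite3_exec (db,
                       "CREATE TABLE IF NOT EXISTS symbol_merged ("
                       "  symbol_id  INTEGER PRIMARY KEY,"
                       "  name       TEXT NOT NULL,"
                       "  file_id    INTEGER NOT NULL,"
                       "  file_line  INTEGER NOT NULL,"
                       "  type_name  TEXT,"     /* previously in sym_type */
                       "  scope_name TEXT,"     /* previously in scope */
                       "  scope_id   INTEGER"   /* unique scope 'quark' */
                       ")",
                       NULL, NULL, NULL);
}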

Cutting to the chase, let me put all the things we discussed in order of usefulness, including your hash memory tables. We try them one after another; I'll try to explain why they are in this order. The steps start from your current master (so they include your hash table stuff). Note that you can stop anywhere after step 4, depending on whether you are satisfied with the performance.

1) Start using transactions correctly. Not a single massive transaction, not transactionless, but real batched transactions. You already tried it and comment #41 seems to indicate at least 40% improvement over the original. That's a good start, and certainly not the end. This is the first step because it is the right thing to do and also probably the easiest from current master.

2) Keep using your hash tables, but reduce the symbol-hash-table to {key=>int} to only track your custom symbol ID increments. Do you use anything else from that record? If you do, you can keep it for now. The rest of the data in that hash/cache is not needed and goes directly into the inserts. Basically, you insert the data directly in reasonably sized batches of N, but at the same time also add to your hash tables. It will at least remove an unneeded indirection (see the sketch after this list).

3) Combine the symbol, sym_type and scope tables. As I see it, there is no advantage in keeping them separate. I see only disadvantages, such as (a) inserting a symbol requires multiple select/insert pairs per symbol (3 pairs?), (b) it prevents you from using transactions (this is actually the single biggest reason to consider it) and (c) it requires you to invent your own pseudo-database because of a and b. Doing this should not require a rewrite of the whole thing. Once I finish the sdb-queries branch, there is only one place to update on the query side. On the population side, perhaps 2 or 3 places for the insert/update, but it should be more or less trivial. To replace the "scope" table, use a persistent hashing function (see http://www.cogs.susx.ac.uk/courses/dats/notes/html/node114.html for an example) on the current constraint of the table (namely, symbol+type).

4) Step 3 should allow you to get rid of the remaining 3 hashtables and resort to a single insert per symbol (which can conveniently fail as necessary). This is the point where I would say you are ready for micro-optimizations. So, measure the performance and see if you want to continue with the following steps.

5) Disable journaling. This will improve the transactions. Journaling requires disk access, which prevents it from becoming a truly memory-only operation. However, there is a potential for data corruption, so it should really only be disabled if the gain is significant compared to the loss. If the total scan is fast enough, we can do a full rescan in case of an interrupted run (you can mark it with a begin/end marker file somewhere).

6) Disable unused indexes and enable them at the end. This usually helps a bit, as you pointed out, and is also correct thing to do with massive updates -- a cheap gain.

7) Optimize the libgda statement execution path, which includes reducing allocations/deallocations during each statement cycle. First, try using local GValue structs (allocated on the stack); that should prevent GValue creation/destruction.

8) Try to reduce other string allocation/deallocations in statement execution path. Try other little tricks etc.

9) Then try your _set_static_value() allocated with g_slice(). In theory, it should be pretty close to stack allocation above, unless libgda does something funky with _set_value(), if so fix libgda -- under no circumstances should an API taking GValue involve allocation/deallocation of GValues inside just to pass along the value, which is what GValue is meant for ultimately. You have done this (or something close to it) all over symbol-db, but I am curious why step 7 didn't work equally if you have tried it.

10) Try temp_store=memory + journaling=off to create truly memory-only tables and use them to populate your tables. Then transfer them to disk in one scoop at the end (I suppose you can do it in a single sql statement). This should be similar to your current hash table approach. Not as fast, but faster than the disk approach, with the added advantage that you don't introduce custom memory tables yourself (hence, your code stays clean and maintainable).

10) Get rid of libgda. It's actually not that hard. For query-side, it's only about updating symbol-query(-result).[ch]. And some update in engine for population-side (which should have become much easier after steps 1 to 4).

11) If this is still not sufficient, you can bring back your symbol memory table and avoid insert collisions at the C level rather than at the sqlite level. And/or prevent expensive selects by caching the data (if you still need them at this point, that is). You can even try caching other tables too (such as file). You don't want to do this before the above steps, because it adds extra maintenance burden and ugliness, somehow begging the question of how everyone else in the world manages to use sqlite just fine. At least by reaching step 11, you can truly claim that sqlite is not good enough standalone and therefore requires you to go with such arrangements. So you keep it as a last resort.
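
To make step 2 concrete, here is a minimal sketch of the reduced hash table, assuming a plain GHashTable keyed by a symbol "uniqueness" string; the names are invented for the example and are not the real symbol-db ones:

#include <glib.h>

/* Step 2 sketch: map a symbol key string to the custom integer ID we
 * hand out ourselves.  Nothing else is cached; the full record goes
 * straight into the batched INSERT. */
static GHashTable *symbol_ids = NULL;
static gint next_symbol_id = 1;

static gint
get_or_add_symbol_id (const gchar *symbol_key)
{
  gpointer value;

  if (symbol_ids == NULL)
    symbol_ids = g_hash_table_new_full (g_str_hash, g_str_equal,
                                        g_free, NULL);

  /* lookup_extended distinguishes "missing" from a stored value of 0 */
  if (g_hash_table_lookup_extended (symbol_ids, symbol_key, NULL, &value))
    return GPOINTER_TO_INT (value);

  g_hash_table_insert (symbol_ids, g_strdup (symbol_key),
                       GINT_TO_POINTER (next_symbol_id));
  return next_symbol_id++;
}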

Let me know if you think any of these steps is unfeasible. Then we can discuss how to make them feasible and refine the plan.
Comment 45 Naba Kumar 2010-06-07 20:55:20 UTC
Actually, let's bring step 10 "get rid of libgda" forward, right after step 5. In the end I think we are only using it for queries and inserts, which means no specific advantage over sqlite. Plus, the libanjuta symbol API takes care of the abstraction for the rest.
Comment 46 Massimo Cora' 2010-06-08 21:51:32 UTC
(In reply to comment #44)
> Hi Massimo,
> 
> (In reply to comment #40)
> > 
> > in theory this is true, and in our case it would be much more true if we
> > weren't using an extra layer as libgda.
> > If we were using only pure sqlite calls then I would consider as a mad thing to
> > create our own structs just to flush them later. And, again, I would be 100%
> > with your vision.
> > But there's a but: IMHO libgda isn't fast enought, and a too much generic layer
> > to permit very performant accesses to db as we need.
> > 
> If libgda results in creating layers of complexity (values spooling, memory
> tables, transactionless populations etc.), then clearly it has failed to serve
> us right. Then we should just get rid of it. It's probably much easier porting
> to sqlite than all the added layers on top, besides libanjuta takes care of
> abstracting symbols for rest so we don't gain much from libgda anyways. That
> said, let's keep the option open until we try a couple of other things.
> 
> > 
> > true. But it's also true that we need the initial boost just once. All the
> > other cases are normal ones.
> > 
> Let's try something that's also useful for other parts.
> 
> > 
> > that would make a lot of data redundant, and we'll end maintaining a plain big
> > bloated table. So why should we at all use a db then?
> > If that's the case, then a view should be considered, but being a relational db
> > I would keep tables separated.
> 
> Looking at anjuta db:
> 
> sqlite> select count(*) from symbol;
> 29030
> sqlite> select count(*) from sym_type;
> 21194
> sqlite> select count(*) from scope;
> 19817
> 
> That's almost 1:1 mapping between the three tables instead of N:1 normally used
> to gain normalization. So whatever reasons they were separate tables, they
> don't exist anymore. I think you can safely combine them and gain simplicity at
> least.

Well, I have to agree with you here.
Even though it could be some work, it's worth trying. Fewer joins, more speed. I'm a bit afraid this could break your new sdb-queries branch, given the joins you're going to do, the data retrieval and so on.
Btw: I had a quick look at your code. Can you please explain the big picture, so that it's quicker to understand?
In any case I think I'll remove tables such as ext_include, file_ignore and ext_ignore, which have never been used.


> 
> Cutting the chase, let me put all the things we discussed in the order of
> usefulness, including your hash memory tables. We try them one after another. I
> try to explain why they are in given order. Also, the steps start from your
> current master (so it includes your hash tables stuff). Note that you can stop
> anywhere after attending step 4, depending on your performance satisfaction.

Maybe I can create a new remote branch to put the changes in, so as not to break master. I could probably also tag the completed steps so that they're easier to track.

> 
> 1) Start using transactions correctly. Not single massive transaction, not
> transactionless, but a real batched transactions. You already tried it and
> comment #41 seems to indicate at least 40% improvement over the original.
> That's a good start, and certainly not the end. This is first step because this
> is the right thing to do and also probably the easiest from current master.
> 

OK, I have to ask Malerba whether libgda supports multiple concurrent transactions.

> 2) Keep using your hash tables, but reduce symbol-hash-table to {key=>int} to
> only track your custom symbol ID increments. Do you use anything else from that
> record? If you do, you can keep them for now. The rest of the data in that
> hash/cache is not needed and go directly in the inserts. Basically, you insert
> the data directly with sufficiently working N batches, but at the same time
> also add in your hash-tables. It will reduce an unneeded indirection at least.

Basically, in that hashtable (and its associated queue) I put a struct where I store the data collected from the symbols, avoiding the more complex flow in sdb_engine_add_new_symbol (), which would otherwise have been unnecessarily overloaded.

> 
> 3) Combine symbol, sym_type and scope tables. As I see, there is no advantage
> in keeping them separate. I see only disadvantages such as (a) inserting a
> symbol requires multiple select/insert pairs per symbol (3 pairs?), (b)
> Prevents you from using transactions (this is actually single most reason you
> should consider) and (c) requires you to invent your own pseudo-database
> because of a and b. Doing this should not require re-write of whole thing. Once
> I finish the sdb-queries branch, there is only place to update it for
> query-side. For population-side, perhaps 2 or 3 for the insert/update, but more
> or less should be trivial. To replace "scope" table, use a persistent hashing
> function (see http://www.cogs.susx.ac.uk/courses/dats/notes/html/node114.html
> for an example) on current constraint of the table (namely, symbol+type).

If I'm not able to use multiple transactions, I think the merge will be necessary.
Is the hash function meant to replace scope.scope_name+scope.type_id?
Do we have an equivalent of such a hash function in GLib? I suppose so.
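
If GLib's g_str_hash isn't suitable (its exact output is an implementation detail, and here the value ends up stored in the db), I'd picture something hand-rolled along these lines, just as a sketch:

#include <glib.h>

/* Sketch of a persistent scope hash: combine the scope name and the
 * type id with a fixed djb2-style algorithm, so the value written to
 * the db never changes across library versions. */
static guint32
scope_hash (const gchar *scope_name, gint type_id)
{
  guint32 h = 5381;
  const guchar *p;

  for (p = (const guchar *) scope_name; *p != '\0'; p++)
    h = (h * 33) ^ *p;

  /* fold the type id in as well */
  return (h * 33) ^ (guint32) type_id;
}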

> 
> 4) Step 3 should allow you to get rid of the remaining 3 hastables and resort
> to single insert per symbol (which can conveniently fail as necessary). This is
> the point where I would say you are ready for micro-optimizations. So, measure
> the performance and see if you want to continue the following steps.

If we drop under 15 seconds I would stop here, I think.

5) Dropping libgda: this could be a lot of work. I hope to get decent results just by merging the tables.

> 
> 5) Disable journaling. This will improve the transactions. Journaling requires
> disk access which prevents it from becoming truly memory-only operation.
> However, there is potential of data corruption, so it should really be disabled
> if the gain is significant compared to loss. If the total scan if fast enough,
> we can do a total scan in case of failed interruption (you can mark it with
> begin/end file somewhere).

I'm already disabling journaling (see PRAGMA journal_mode = OFF), and the gain is quite big compared to leaving it enabled.
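
For reference, at connection setup it boils down to something like this (raw sqlite3 shown for brevity; in the real code the pragmas go through the libgda connection). The temp_store pragma is the one your memory-tables step would add, shown only for completeness:

#include <sqlite3.h>

/* Population-time pragmas: journal_mode = OFF trades crash safety for
 * speed; temp_store = MEMORY keeps temporary tables in RAM. */
static void
set_population_pragmas (sqlite3 *db)
{
  sqlite3_exec (db, "PRAGMA journal_mode = OFF", NULL, NULL, NULL);
  sqlite3_exec (db, "PRAGMA temp_store = MEMORY", NULL, NULL, NULL);
}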

> 
> 6) Disable unused indexes and enable them at the end. This usually helps a bit,
> as you pointed out, and is also correct thing to do with massive updates -- a
> cheap gain.

ok

> 
> 7) Optimize libgda statement execution path. Which includes reducing
> allocation/deallocations during each statement cycle. First, try using local
> GValue structs (allocated in stack) - that should prevent GValue
> creation/destruction.

gda_holder_take_value () copies the GValue if its contents differ from the old one;
see http://git.gnome.org/browse/libgda/tree/libgda/gda-holder.c, line 882.
This is safe, but inefficient from a performance point of view.

> 
> 8) Try to reduce other string allocation/deallocations in statement execution
> path. Try other little tricks etc.

Yep, even if I think that here we would gain maybe a single millisecond over 1k files.

> 
> 9) Then try your _set_static_value() allocated with g_slice(). In theory, it
> should be pretty close to stack allocation above, unless libgda does something
> funky with _set_value(), if so fix libgda -- under no circumstances should an
> API taking GValue involve allocation/deallocation of GValues inside just to
> pass along the value, which is what GValue is meant for ultimately. You have
> done this (or something close to it) all over symbol-db, but I am curious why
> step 7 didn't work equally if you have tried it.
> 

I didn't know g_slice (), other than by name, before you used it in symbol-db-model & co.
I can make the memory pool use GValues allocated with g_slice.
As written above, libgda makes copies of the GValues.

#include <glib-object.h>

/* Allocate a zero-filled GValue from the slice allocator and initialize
 * it to the given type. */
GValue *
tp_g_value_slice_new (GType type)
{
  GValue *ret = g_slice_new0 (GValue);

  g_value_init (ret, type);
  return ret;
}
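
The matching release helper would presumably be along these lines (a sketch, not actual symbol-db code):

/* Unset the GValue contents and give the slice back to the allocator. */
static void
value_slice_free (GValue *value)
{
  g_value_unset (value);
  g_slice_free (GValue, value);
}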


> 10) Try temp_store=memory + journaling=off to create truly memory only tables
> and use it to populate your tables. Then transfer them to disk in once scoop at
> the end (I suppose you can do it in single sql statement). This should be
> similar to your current hash table approach. Not as fast, but faster than disk
> approach, with the added advantage that you don't introduce custom memory
> tables yourself (hence, your code stays clean and maintainable).

I already tried this step (comment #27), but in-memory tables are slower than disk ones.

> 
> 10) Get rid of libgda. It's actually not that hard. For query-side, it's only
> about updating symbol-query(-result).[ch]. And some update in engine for
> population-side (which should have become much easier after steps 1 to 4).
> 
> 11) If this is still not sufficient, you can bring back your symbol memory
> table and avoid insert collisions at C level rather than at sqlite level.
> And/Or prevent expensive selects by caching the data (if you still need them at
> this point, that is). You can even try caching other tables too (such as file).
> You don't want to do it before above steps, because this adds extra maintenance
> burden and ugligess, somehow questioning how everyone else in the world managed
> using sqlite just fine. 

This is what I've been wondering since I started writing symbol-db...
From my tests, 20k symbols can be inserted directly from the command line in 0.6 seconds. With libgda & co. it takes 1.9.

> At least by reaching step 11, you can truly claim that
> sqlite is not good enough standalone and therefore requiring you to go with
> such arrangements. So you keep it as last resort.
> 
> Let me know if you see any of these steps is unfeasible. Then we discuss how to
> make them feasible and refine.

I'll do my best to follow the steps you pointed out.
Thanks.
Comment 47 Massimo Cora' 2010-06-08 23:08:22 UTC
(In reply to comment #46)
> Btw: I had a quick look at your code. Can you please explain its big picture so
> that it's quicker to understand?


Actually, I've looked at it more deeply. It's pretty neat, I like it.
I just have one little question, which I couldn't work out by following the nested for loops:
when you issue a query, for instance get_members (), how much info do you retrieve? Do you get everything, from file_path to sym_access to all the fields on symbol, etc.?
Joining a lot of tables, if that's the case, isn't fast.
In my implementation I let the user decide which info to retrieve, to make queries faster. With your approach, if get_members () doesn't fetch file_path, for instance, how can I retrieve it?
Comment 48 Naba Kumar 2010-06-09 14:08:50 UTC
(In reply to comment #47)
> (In reply to comment #46)
> > Btw: I had a quick look at your code. Can you please explain its big picture so
> > that it's quicker to understand?
> 
> 
> Actually I've looked deeper at it. It's pretty neat, I like it.

I am glad you like it. The big picture is essentially what we discussed on the mailing list: http://sourceforge.net/mailarchive/message.php?msg_name=AANLkTilysL8wAiWUpcUUL0jtYU7PB9MDrqwaGk1nmy4S%40mail.gmail.com

There is a query class where you set various parameters and run it. Then there is the result iterator class. That's all. There is still a bit more work to clean it up (and make it work :)). I am close to finishing it and have already started porting other plugins to use the new API - to test it.

> When you issue a query, for instance get_members (), how much info do you
> retrieve? Do you get from file_path to the sym_access to all fields on symbol
> etc?

You set which fields you want with IAnjutaSymbolQuery::set_fields(). Only those are retrieved.
Comment 49 Massimo Cora' 2010-06-15 21:49:47 UTC
(In reply to comment #46)
> > 
> > 1) Start using transactions correctly. Not single massive transaction, not
> > transactionless, but a real batched transactions. You already tried it and
> > comment #41 seems to indicate at least 40% improvement over the original.
> > That's a good start, and certainly not the end. This is first step because this
> > is the right thing to do and also probably the easiest from current master.
> > 
> 

I started a local test branch where I used multiple connections to the same db, with each transaction populating one table. But unfortunately I discovered this:
http://www.sqlite.org/faq.html#q5: 

"Multiple processes can have the same database open at the same time. Multiple processes can be doing a SELECT at the same time. But only one process can be making changes to the database at any moment in time, however."

Indeed, from the command line I get:
sqlite> begin transaction "fgh";
sqlite> insert into t values (5);
Error: database is locked

On the other hand, two or more nested transactions within the same connection don't work either:

sqlite> begin transaction "asd";
sqlite> begin transaction "fgh";
Error: cannot start a transaction within a transaction

So my idea of using several transactions at the same time isn't doable.
This makes me think that the only solution would be to rescan the ctags output several times, so that the tables can be populated one transaction at a time. But honestly I dislike this option.
Another road would be to have transactions that insert into/select from multiple tables (symbol, scope, sym_type, sym_kind etc.) for N symbols and then commit. I remember trying this when symbol-db was close to completion, but it wasn't successful.


> 
> > 2) Keep using your hash tables, but reduce symbol-hash-table to {key=>int} to
> > only track your custom symbol ID increments. Do you use anything else from that
> > record? If you do, you can keep them for now. The rest of the data in that
> > hash/cache is not needed and go directly in the inserts. Basically, you insert
> > the data directly with sufficiently working N batches, but at the same time
> > also add in your hash-tables. It will reduce an unneeded indirection at least.
> 

This would require, as written above, selects and inserts in the same transaction, and I don't think that would perform well.


> > 
> > 3) Combine symbol, sym_type and scope tables. As I see, there is no advantage
> > in keeping them separate. I see only disadvantages such as (a) inserting a
> > symbol requires multiple select/insert pairs per symbol (3 pairs?), (b)
> > Prevents you from using transactions (this is actually single most reason you
> > should consider) and (c) requires you to invent your own pseudo-database
> > because of a and b. Doing this should not require re-write of whole thing. Once
> > I finish the sdb-queries branch, there is only place to update it for
> > query-side. For population-side, perhaps 2 or 3 for the insert/update, but more
> > or less should be trivial. To replace "scope" table, use a persistent hashing
> > function (see http://www.cogs.susx.ac.uk/courses/dats/notes/html/node114.html
> > for an example) on current constraint of the table (namely, symbol+type).
> 
> If I'm not able to use multiple transactions I think that merge would be
> necessary.
> Is the hash function to replace scope.scope_name+scope.type_id?
> Do we have an equivalent on glib to get the same hash function? I suppose yes
> 

I thought a lot about merging the scope table.
While sym_type doesn't seem to be a problem to merge (at least there are no major issues), the scope table defines something that sits above the symbol itself.
The merge would also collide with bug #615403, which is worth fixing now so that we don't have to come back and hack on the symbol-db core again.
In the table you show in that bug you associate a line and a file with the scope. I don't think that's quite correct: a scope is something purely abstract, so it shouldn't be tied to a specific file. I would instead add to the scope table its parent scope, if it has one. That would solve the parent/child relationship.
What I still can't figure out is how to get that relationship at population time.
Comment 50 Naba Kumar 2010-06-16 19:57:08 UTC
(In reply to comment #49)
> 
> I started a testing local branch where I used multiple connections to the same
> db. Each transaction had to populate a table. But unfortunately I discovered
> this:
> http://www.sqlite.org/faq.html#q5: 
> 
> "Multiple processes can have the same database open at the same time. Multiple
> processes can be doing a SELECT at the same time. But only one process can be
> making changes to the database at any moment in time, however."
> 
> ...
> Then my idea to use more transactions at the same time isn't doable.

I am not sure I understand why you need multiple transactions at once. Is it because you think multiple instances of anjuta might have a problem? Is it because there are multiple tables to populate? Or because you do some selects during a transaction (like accessing other files)?

If it is the 1st, then I don't think that's an issue, since our transactions will be quite small (a few seconds I guess), so the instances will just take turns.

If it is the 2nd, then I thought you could use the same transaction to populate multiple tables?

If it is the 3rd, then we need to figure out a way to avoid running direct selects in the middle of a transaction. For instance, if you want to get a file ID, you run it as a subquery of your insert statement. The same goes for the other tables.

I think I need to understand better what the exact problems are.

> 
> I thought a lot about merging scope table.
> While on a side sym_type seems to be not a problem to merge (at least there are
> no major issues), scope table defines something that is above the symbol
> itself.
> The merge would also collide with bug #615403, which is worth fixing now
> avoiding to return hacking on the symbol-db core.

The "scope" table is not used in query side in sdb-queries branch (I don't remember if you have used it before), so the table is only use currently during symbol population.

The only relevant thing used on the query side is the scope_definition_id/scope_id "numbers" in the symbol table, which are just integers generated to identify a unique scope. This is where I suggested we could replace them with a hash function instead. On the population side, do you have a problem populating scope_definition_id/scope_id without involving a table?

As for sym_type, it's just a few lines of changes to fix the query side, so we could proceed with this change. I would recommend waiting for my branch to merge to master, which will make it easy to change the table joins for it (or you do the changes based off the sdb-queries branch -- it's almost functioning).

> On the table you print on that bug you associate to the scope a line and a
> file. I'm thinking that it isn't that correct. Scope is something purely
> abstract, and so it wouldn't be associated to a specific file. I would instead
> add to scope table its parent scope, if it has one. That would solve the
> parent/child relationship.

That seems like a good idea. So, scope_id = hash(symbol,parent,type)? Currently, I think the table generates scope_id = hash(symbol,type).

> What I still cannot get is how to get the relationship at population time.

Yes, that might be tricky if you don't know the parent of a symbol at population time. I guess we need to find a feasible combination for generating scope_id.
Comment 51 Massimo Cora' 2010-06-16 21:34:34 UTC
(In reply to comment #50)
> (In reply to comment #49)
> > 
> > I started a testing local branch where I used multiple connections to the same
> > db. Each transaction had to populate a table. But unfortunately I discovered
> > this:
> > http://www.sqlite.org/faq.html#q5: 
> > 
> > "Multiple processes can have the same database open at the same time. Multiple
> > processes can be doing a SELECT at the same time. But only one process can be
> > making changes to the database at any moment in time, however."
> > 
> > ...
> > Then my idea to use more transactions at the same time isn't doable.
> 
> I am not sure I understand why you need multiple transactions at once. Is it
> because you think multiple instances of anjuta might have problem?, Is it
> because there are multiple tables to populate? or because because you do some
> selects during a transactions (like accessing other files?)
> 
> If it is 1st, then I don't think that's an issue since our transactions will be
> quite small (in secs I guess), so each instance will just share it one after
> another.
> 
> If it is 2nd, then I though you can use the same transaction to populate
> multiple tables?
> 
> If it is 3rd, then we need to figure out a way to avoid running direct selects
> in middle of a transaction. For instance, if you want to get a file ID, you run
> it as subquery to your insert statement. Same goes for other tables?
> 

It's not the 1st, because the cases in which several anjuta instances access the same project db are rare. For the global db it's just a matter of populating it the first time; after that, all the other accesses are reads.

It's the 2nd.
My idea with multiple simultaneous transactions was to populate each table with its own transaction. I.e. this would have removed the tablemaps/hashtables, and the data would have gone directly into the db (first cached, then on disk).
Populating multiple tables with a single transaction (broken into chunks of N symbols) doesn't perform well; I already tried that road some time ago.
There are also the selects done in between the inserts.

You gave me an idea anyway! For legacy reasons I was forced to use plain insert queries with prepared statements, because libgda didn't support subqueries in them: the SQL parser was in its early stages and didn't help much. This, together with other libgda problems (last but not least the main 4.x branch), made me write functions like sdb_engine_get_tuple_id_by_unique_name* ().
Now I think I can really get rid of them and use subqueries directly in the inserts.
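
Something like this is what I mean, with the lookup of the file id folded into the insert itself (raw sqlite3 shown for illustration; the table and column names are invented and assume the file row already exists):

#include <sqlite3.h>

/* The SELECT for the file id runs as a subquery inside the INSERT, so
 * no separate select statement is issued in the middle of the
 * transaction. */
static int
insert_symbol_with_subquery (sqlite3 *db, const char *name, int line,
                             const char *file_path)
{
  sqlite3_stmt *stmt;
  int rc;

  sqlite3_prepare_v2 (db,
                      "INSERT INTO symbol (name, file_line, file_defined_id) "
                      "VALUES (?1, ?2, "
                      "        (SELECT file_id FROM file WHERE file_path = ?3))",
                      -1, &stmt, NULL);
  sqlite3_bind_text (stmt, 1, name, -1, SQLITE_TRANSIENT);
  sqlite3_bind_int (stmt, 2, line);
  sqlite3_bind_text (stmt, 3, file_path, -1, SQLITE_TRANSIENT);
  rc = sqlite3_step (stmt);
  sqlite3_finalize (stmt);
  return rc == SQLITE_DONE ? SQLITE_OK : rc;
}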


> The "scope" table is not used in query side in sdb-queries branch (I don't
> remember if you have used it before), so the table is only use currently during
> symbol population.
> 

Yes, I use it to calculate the relationships between symbols in the second pass of the scan.

> The only relevant thing used on query side is the scope_definition_id/scope_id
> "numbers" in symbol table, which is just an integer generated in a way to
> identify a unique scope. This is where I suggested if we could replace it with
> a hash function instead. On population side, do you have problem populating
> scope_definition_id/scope_id without involving a table?
> 

Hmm, I have to think about it because it's tricky.
Maybe with some simplifications it could be doable.

> As for sym_type, it's just a few lines of changes to fix query side. So we
> could proceed with this change. I would recommend to wait for my branch to
> merge to master which will make it easy to change table joins for it (or you do
> the changes based off sdb-queries branch -- it's almost functioning).
> 

OK, I'll start with sym_type. I'll let you merge your changes into master, and then I'll start another branch for the changes to the inserts and sym_type.

> > On the table you print on that bug you associate to the scope a line and a
> > file. I'm thinking that it isn't that correct. Scope is something purely
> > abstract, and so it wouldn't be associated to a specific file. I would instead
> > add to scope table its parent scope, if it has one. That would solve the
> > parent/child relationship.
> 
> That seems like a good idea. So, scope_id = hash(symbol,parent,type)?
> Currently, I think the table generates scope_id = hash(symbol,type).
> 
> > What I still cannot get is how to get the relationship at population time.
> 
> Yes, that might be tricky if you don't know parent of a symbol at population
> time. I guess we need find what is a feasible combination to generate scope_id.

ok. So first of all I'll merge sym_type, then we'll think about scope.
Comment 52 Massimo Cora' 2010-06-22 23:15:35 UTC
Created attachment 164365 [details]
callgrind output

Benchmarking symbol-db on branch sdb-core-trans produces the attached file.
There's a lot of data to analyze.
A lot of time seems to be spent in sdb_engine_extract_type_qualifier (), but after commenting it out the average gain is minimal (1 ms off the total average).

I still have to analyze the rest of the data, and the bottlenecks should be looked for elsewhere, e.g.:
3.12 % g_value_copy --> gda_value_copy ().
5.09 % g_hash_table_lookup --> a lot of related gda_* functions.


A README has been added under benchmark/ to make benchmarking easier.
Comment 53 Massimo Cora' 2010-06-22 23:16:31 UTC
(In reply to comment #52)

Oh, I almost forgot: use kcachegrind to analyze it.
Comment 54 Massimo Cora' 2010-06-23 21:27:17 UTC
Created attachment 164438 [details]
cachegrind output

a cachegrind analysis.
Comment 55 Naba Kumar 2010-06-25 09:27:41 UTC
(In reply to comment #52)
> 
> i still have to analyze the other data, and the bottlenecks should be searched
> on other ways, like:
> 3.12 % g_value_copy --> gda_value_copy ().
> 5.09 % g_hash_table_lookup --> there's a lot of gda_* functions related.
> 
Before you get into optimizing low-level functions, try to get a cumulative (inclusive) profile of the functions. Then start top-down from the most expensive functions, and move downwards as you run out of ideas for optimizing the higher-level ones (i.e. you can either optimize a copy function which is called N times, or reduce N to N/2 in the caller).

For example, here is the callgrind output with a cumulative sort (callgrind_annotate --inclusive=yes callgrind.out)

vadmin@ubuntu:~/git/anjuta$ callgrind_annotate --inclusive=yes callgrind.out
--------------------------------------------------------------------------------
Profile data file 'callgrind.out' (creator: callgrind-3.5.0-Debian)
--------------------------------------------------------------------------------
I1 cache: 
D1 cache: 
L2 cache: 
Timerange: Basic block 0 - 6617141003
Trigger: Program termination
Profiled target:  ./.libs/benchmark test-dir (PID 8335, part 1)
Events recorded:  Ir sysCount sysTime
Events shown:     Ir sysCount sysTime
Event sort order: Ir sysCount sysTime
Thresholds:       99 0 0
Include dirs:     
User annotated:   
Auto-annotation:  off

--------------------------------------------------------------------------------
            Ir sysCount sysTime 
--------------------------------------------------------------------------------
29,034,927,415  171,403 510,982  PROGRAM TOTALS

--------------------------------------------------------------------------------
            Ir sysCount sysTime  file:function
--------------------------------------------------------------------------------
26,842,956,477  118,418 489,159  /home/aurel32/eglibc/eglibc-2.11.1/nptl/pthread_create.c:start_thread [/lib/libpthread-2.11.1.so]
26,842,577,218  118,344 488,938  ???:0x00000000000676a0 [/lib/libglib-2.0.so.0.2400.1]
26,842,555,755   88,221 421,617  ???:0x0000000000069390 [/lib/libglib-2.0.so.0.2400.1]
26,825,250,364   77,877 417,166  ???:sdb_engine_ctags_output_thread [/home/pescio/gitroot/gitinstalled/usr/lib/anjuta/libanjuta-symbol-db.so]
26,806,179,963   30,853 346,256  ???:sdb_engine_populate_db_by_tags [/home/pescio/gitroot/gitinstalled/usr/lib/anjuta/libanjuta-symbol-db.so]
26,488,978,025   20,718 321,353  ???:sdb_engine_add_new_symbol [/home/pescio/gitroot/gitinstalled/usr/lib/anjuta/libanjuta-symbol-db.so]
11,627,208,006   11,136 128,387  /home/pescio/gitroot/libgda/libgda/gda-connection.c:gda_connection_statement_execute_v [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
11,249,060,576   10,777 122,759  /home/pescio/gitroot/libgda/libgda/sqlite/gda-sqlite-provider.c:gda_sqlite_provider_statement_execute [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
 8,581,044,190    9,921 107,939  ???:g_signal_emit [/usr/lib/libgobject-2.0.so.0.2400.1]
 8,546,151,028    9,906 107,689  ???:g_signal_emit_valist [/usr/lib/libgobject-2.0.so.0.2400.1]
 7,531,953,317    6,797 104,801  /home/pescio/gitroot/libgda/libgda/gda-holder.c:gda_holder_take_static_value [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
 7,504,841,981    6,778 104,643  /home/pescio/gitroot/libgda/libgda/gda-holder.c:real_gda_holder_set_const_value [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
 6,878,251,661    5,841  88,590  ???:sdb_engine_add_new_scope_definition [/home/pescio/gitroot/gitinstalled/usr/lib/anjuta/libanjuta-symbol-db.so]
 6,489,929,874    5,264  76,884  /home/pescio/gitroot/libgda/libgda/gda-connection.c:gda_connection_statement_execute_select [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
 6,335,692,393    7,777  75,539  ???:0x00000000000227a0 [/usr/lib/libgobject-2.0.so.0.2400.1]
 6,207,333,419    3,540  63,090  ???:sdb_engine_extract_type_qualifier [/home/pescio/gitroot/gitinstalled/usr/lib/anjuta/libanjuta-symbol-db.so]
 6,027,274,621    5,778  70,406  ???:g_object_new [/usr/lib/libgobject-2.0.so.0.2400.1]
 5,943,113,710    4,765  69,373  ???:g_object_new_valist [/usr/lib/libgobject-2.0.so.0.2400.1]
 5,561,066,995   19,620  64,978  ???:sdb_engine_get_tuple_id_by_unique_name [/home/pescio/gitroot/gitinstalled/usr/lib/anjuta/libanjuta-symbol-db.so]
 5,309,295,292    5,264  63,024  ???:g_object_newv [/usr/lib/libgobject-2.0.so.0.2400.1]
 5,258,441,138    6,727  60,185  ???:g_closure_invoke [/usr/lib/libgobject-2.0.so.0.2400.1]
 5,198,087,866    5,922  52,262  /home/pescio/gitroot/libgda/libgda/gda-connection.c:gda_connection_statement_execute_non_select [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
 4,995,570,185    4,271  64,425  ???:sdb_engine_get_tuple_id_by_unique_name2 [/home/pescio/gitroot/gitinstalled/usr/lib/anjuta/libanjuta-symbol-db.so]
 4,774,937,956    3,855  54,824  /home/pescio/gitroot/libgda/libgda/sqlite/gda-sqlite-recordset.c:_gda_sqlite_recordset_new [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
 4,351,119,236    3,472  53,357  ???:sdb_engine_add_new_sym_kind [/home/pescio/gitroot/gitinstalled/usr/lib/anjuta/libanjuta-symbol-db.so]
 4,160,688,818   20,189  34,963  /home/pescio/gitroot/libgda/libgda/sqlite/sqlite-src/sqlite3.c:sqlite3_step [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
 4,132,710,962   20,160  34,519  /home/pescio/gitroot/libgda/libgda/sqlite/sqlite-src/sqlite3.c:sqlite3Step [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
 4,116,395,237   20,156  34,447  /home/pescio/gitroot/libgda/libgda/sqlite/sqlite-src/sqlite3.c:sqlite3VdbeExec [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
 4,006,613,513    2,311  39,602  /home/aurel32/eglibc/eglibc-2.11.1/posix/regcomp.c:regcomp [/lib/libc-2.11.1.so]
 3,737,028,846    2,207  37,738  /home/aurel32/eglibc/eglibc-2.11.1/posix/regcomp.c:re_compile_internal [/lib/libc-2.11.1.so]
 3,298,084,054    2,114  38,179  /home/aurel32/eglibc/eglibc-2.11.1/posix/regcomp.c:parse_reg_exp'2 [/lib/libc-2.11.1.so]
 3,295,128,708    2,114  38,179  /home/aurel32/eglibc/eglibc-2.11.1/posix/regcomp.c:parse_branch'2 [/lib/libc-2.11.1.so]
 3,279,157,565    2,110  38,148  /home/aurel32/eglibc/eglibc-2.11.1/posix/regcomp.c:parse_expression'2 [/lib/libc-2.11.1.so]
 3,054,402,811    2,421  35,236  /home/pescio/gitroot/libgda/libgda/gda-data-select.c:gda_data_select_set_property [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
 2,908,818,930    2,637  38,671  /home/pescio/gitroot/libgda/libgda/gda-custom-marshal.c:_gda_marshal_ERROR__VALUE [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
 2,827,519,943    2,284  33,561  ???:0x00000000000126c0 [/usr/lib/libgobject-2.0.so.0.2400.1]
 2,608,587,221    2,332  34,017  ???:g_signal_emit'2 [/usr/lib/libgobject-2.0.so.0.2400.1]
 2,589,221,296    2,326  33,974  ???:g_signal_emit_valist'2 [/usr/lib/libgobject-2.0.so.0.2400.1]
 2,528,010,960    1,996  29,192  ???:g_object_new'2 [/usr/lib/libgobject-2.0.so.0.2400.1]
 2,515,978,543    1,980  29,129  ???:g_object_new_valist'2 [/usr/lib/libgobject-2.0.so.0.2400.1]
 2,496,014,741    2,249  32,901  /home/pescio/gitroot/libgda/libgda/gda-set.c:validate_change_holder_cb [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
 2,298,759,219    1,833  26,922  ???:g_object_newv'2 [/usr/lib/libgobject-2.0.so.0.2400.1]
 2,213,969,357    2,001  32,137  ???:g_object_unref [/usr/lib/libgobject-2.0.so.0.2400.1]
 2,187,189,426   52,065  21,610  ???:0x0000000000401330 [/home/pescio/gitroot/anjuta/plugins/symbol-db/benchmark/.libs/benchmark]
 2,187,187,072   52,065  21,610  /home/aurel32/eglibc/eglibc-2.11.1/csu/libc-start.c:(below main) [/lib/libc-2.11.1.so]
 2,187,186,863   52,065  21,610  ???:main [/home/pescio/gitroot/anjuta/plugins/symbol-db/benchmark/.libs/benchmark]
 2,176,889,570    1,453  24,816  /home/aurel32/eglibc/eglibc-2.11.1/posix/regcomp.c:parse_reg_exp [/lib/libc-2.11.1.so]
 2,176,738,498    1,453  24,816  /home/aurel32/eglibc/eglibc-2.11.1/posix/regcomp.c:parse_branch [/lib/libc-2.11.1.so]
 2,170,441,092    1,448  24,633  /home/aurel32/eglibc/eglibc-2.11.1/posix/regcomp.c:parse_expression [/lib/libc-2.11.1.so]
 2,076,308,323    1,606  23,371  /home/pescio/gitroot/libgda/libgda/gda-set.c:gda_set_copy [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
 1,773,713,121   32,671  20,496  ???:gtk_main [/usr/lib/libgtk-x11-2.0.so.0.2000.1]
 1,773,706,837   32,671  20,496  ???:g_main_loop_run [/lib/libglib-2.0.so.0.2400.1]
 1,773,648,768   32,668  20,476  ???:0x0000000000042030 [/lib/libglib-2.0.so.0.2400.1]
 1,740,878,283    9,976  19,968  ???:g_main_context_dispatch [/lib/libglib-2.0.so.0.2400.1]
 1,736,624,622      819  16,762  /home/aurel32/eglibc/eglibc-2.11.1/posix/regexec.c:regexec@@GLIBC_2.3.4 [/lib/libc-2.11.1.so]
 1,735,892,192      818  16,745  /home/aurel32/eglibc/eglibc-2.11.1/posix/regexec.c:re_search_internal [/lib/libc-2.11.1.so]
 1,676,766,158    3,765  14,453  ???:0x000000000003ee80 [/lib/libglib-2.0.so.0.2400.1]
 1,676,085,023    3,764  14,453  ???:sdb_engine_timeout_trigger_signals [/home/pescio/gitroot/gitinstalled/usr/lib/anjuta/libanjuta-symbol-db.so]
 1,669,078,756    1,518  23,465  ???:g_object_unref'2 [/usr/lib/libgobject-2.0.so.0.2400.1]
 1,662,669,772    2,991  14,183  ???:g_cclosure_marshal_VOID(intXX_t) [/usr/lib/libgobject-2.0.so.0.2400.1]
 1,662,669,679    2,991  14,183  ???:on_scan_end [/home/pescio/gitroot/anjuta/plugins/symbol-db/benchmark/.libs/benchmark]
 1,662,399,724    2,988  14,129  ???:symbol_db_engine_close_db [/home/pescio/gitroot/gitinstalled/usr/lib/anjuta/libanjuta-symbol-db.so]
 1,662,391,391    2,984  14,127  ???:sdb_engine_disconnect_from_db [/home/pescio/gitroot/gitinstalled/usr/lib/anjuta/libanjuta-symbol-db.so]
 1,661,922,087    2,979  14,127  ???:sdb_engine_execute_non_select_sql [/home/pescio/gitroot/gitinstalled/usr/lib/anjuta/libanjuta-symbol-db.so]
 1,661,761,241    2,979  14,127  /home/pescio/gitroot/libgda/libgda/sqlite/sqlite-src/sqlite3.c:sqlite3RunVacuum [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
 1,658,212,204    1,903  14,108  /home/pescio/gitroot/libgda/libgda/sqlite/sqlite-src/sqlite3.c:execExecSql [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
 1,657,520,874    1,912  14,108  /home/pescio/gitroot/libgda/libgda/sqlite/sqlite-src/sqlite3.c:execSql [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
 1,654,924,926    1,910  14,025  /home/pescio/gitroot/libgda/libgda/sqlite/sqlite-src/sqlite3.c:sqlite3_step'2 [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
 1,654,903,383    1,910  14,025  /home/pescio/gitroot/libgda/libgda/sqlite/sqlite-src/sqlite3.c:sqlite3Step'2 [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
 1,654,891,195    1,910  14,025  /home/pescio/gitroot/libgda/libgda/sqlite/sqlite-src/sqlite3.c:sqlite3VdbeExec'2 [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
 1,619,137,247    1,431  22,551  /home/pescio/gitroot/libgda/libgda/sqlite/gda-sqlite-recordset.c:gda_sqlite_recordset_dispose [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
 1,571,236,372    1,408  22,096  /home/pescio/gitroot/libgda/libgda/gda-data-select.c:gda_data_select_dispose [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
 1,478,438,115      822  12,466  ???:g_hash_table_lookup [/lib/libglib-2.0.so.0.2400.1]
 1,477,775,493    1,338  21,059  /home/pescio/gitroot/libgda/libgda/gda-data-select.c:free_private_shared_data [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
 1,368,020,367    1,445  10,750  /home/pescio/gitroot/libgda/libgda/sqlite/sqlite-src/sqlite3.c:sqlite3BtreeInsert [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
 1,349,516,707    1,324  21,064  ???:g_type_value_table_peek [/usr/lib/libgobject-2.0.so.0.2400.1]
 1,301,274,391    2,233  10,841  /home/aurel32/eglibc/eglibc-2.11.1/malloc/malloc.c:_int_malloc [/lib/libc-2.11.1.so]
 1,255,211,217      979  14,367  ???:0x00000000000126c0'2 [/usr/lib/libgobject-2.0.so.0.2400.1]
 1,247,703,968    1,122  17,157  ???:g_value_type_compatible [/usr/lib/libgobject-2.0.so.0.2400.1]
 1,169,971,315    1,603  20,274  /home/aurel32/eglibc/eglibc-2.11.1/malloc/malloc.c:free [/lib/libc-2.11.1.so]
 1,163,385,039      788  11,661  /home/aurel32/eglibc/eglibc-2.11.1/malloc/malloc.c:calloc [/lib/libc-2.11.1.so]
 1,119,121,679   16,063  11,423  /home/pescio/gitroot/libgda/libgda/gda-data-model.c:gda_data_model_get_n_rows [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
 1,098,421,932      520  10,622  /home/aurel32/eglibc/eglibc-2.11.1/posix/regcomp.c:calc_eclosure_iter'2 [/lib/libc-2.11.1.so]
 1,097,816,469   16,053  11,320  /home/pescio/gitroot/libgda/libgda/gda-data-select.c:gda_data_select_get_n_rows [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
 1,085,052,036   16,031  11,070  /home/pescio/gitroot/libgda/libgda/sqlite/gda-sqlite-recordset.c:gda_sqlite_recordset_fetch_nb_rows [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
 1,083,974,064   16,078  11,578  /home/pescio/gitroot/libgda/libgda/sqlite/gda-sqlite-recordset.c:fetch_next_sqlite_row [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
 1,077,849,687      768  11,851  ???:g_malloc0 [/lib/libglib-2.0.so.0.2400.1]
 1,072,318,134      530  11,605  /home/aurel32/eglibc/eglibc-2.11.1/posix/regexec.c:build_trtable [/lib/libc-2.11.1.so]
 1,050,802,973    1,148  18,951  ???:g_value_unset [/usr/lib/libgobject-2.0.so.0.2400.1]
   992,529,409    2,091   9,908  /home/aurel32/eglibc/eglibc-2.11.1/malloc/malloc.c:malloc [/lib/libc-2.11.1.so]
   982,363,784      874  12,669  /home/pescio/gitroot/libgda/libgda/gda-data-select.c:_gda_data_select_internals_free [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
   967,414,015      796  11,689  /home/pescio/gitroot/libgda/libgda/sql-parser/gda-statement-struct.c:gda_sql_statement_copy [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
   951,338,042      784  11,992  /home/pescio/gitroot/libgda/libgda/gda-connection-event.c:gda_connection_event_new [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
   941,725,182      599   6,366  /home/pescio/gitroot/libgda/libgda/sqlite/sqlite-src/sqlite3.c:sqlite3BtreeMovetoUnpacked [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
   919,670,430      837  11,825  /home/pescio/gitroot/libgda/libgda/gda-set.c:gda_set_dispose [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
   906,186,126      812  12,578  ???:g_value_copy [/usr/lib/libgobject-2.0.so.0.2400.1]
   893,311,479      739  11,027  /home/pescio/gitroot/libgda/libgda/sql-parser/gda-statement-struct-select.c:_gda_sql_statement_select_copy [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
   877,527,354      686  10,127  /home/pescio/gitroot/libgda/libgda/gda-holder.c:gda_holder_copy [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
   873,682,289      778  11,159  ???:0x00000000000227a0'2 [/usr/lib/libgobject-2.0.so.0.2400.1]
   866,888,904      591   9,575  /home/aurel32/eglibc/eglibc-2.11.1/wcsmbs/btowc.c:btowc [/lib/libc-2.11.1.so]
   865,503,609      680   9,658  /home/pescio/gitroot/libgda/libgda/gda-set.c:gda_set_set_property [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
   853,585,377      742  11,620  /home/pescio/gitroot/libgda/libgda/gda-connection.c:gda_connection_internal_statement_executed [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
   823,004,717      808  11,413  ???:g_value_set_instance [/usr/lib/libgobject-2.0.so.0.2400.1]
   817,856,294      679   9,871  /home/pescio/gitroot/libgda/libgda/gda-statement.c:gda_statement_copy [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
   795,906,974    1,060   6,761  /home/pescio/gitroot/libgda/libgda/sqlite/sqlite-src/sqlite3.c:balance [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
   793,040,395    1,021   6,761  /home/pescio/gitroot/libgda/libgda/sqlite/sqlite-src/sqlite3.c:balance_nonroot [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
   785,679,274    1,104  11,024  /home/aurel32/eglibc/eglibc-2.11.1/malloc/malloc.c:_int_free [/lib/libc-2.11.1.so]
   778,506,530      738  10,887  ???:g_free [/lib/libglib-2.0.so.0.2400.1]
   732,122,100      584   8,059  ???:g_signal_connect_data [/usr/lib/libgobject-2.0.so.0.2400.1]
   717,567,301      589  10,593  ???:sdb_engine_add_new_sym_access [/home/pescio/gitroot/gitinstalled/usr/lib/anjuta/libanjuta-symbol-db.so]
   712,562,534      606   9,453  /home/pescio/gitroot/libgda/libgda/sql-parser/gda-statement-struct-parts.c:gda_sql_expr_copy [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
   709,972,529      560   7,964  /home/pescio/gitroot/libgda/libgda/gda-set.c:gda_set_real_add_holder [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
   688,409,623      455   5,769  ???:g_type_is_a [/usr/lib/libgobject-2.0.so.0.2400.1]
   661,172,741   35,722  68,896  /home/aurel32/eglibc/eglibc-2.11.1/nptl/pthread_mutex_lock.c:pthread_mutex_lock [/lib/libpthread-2.11.1.so]
   643,135,537    5,673  12,338  /home/aurel32/eglibc/eglibc-2.11.1/nptl/pthread_mutex_unlock.c:pthread_mutex_unlock [/lib/libpthread-2.11.1.so]
   636,629,769      319   6,766  /home/aurel32/eglibc/eglibc-2.11.1/malloc/malloc.c:realloc [/lib/libc-2.11.1.so]
   635,624,967      626   9,472  ???:g_type_check_value [/usr/lib/libgobject-2.0.so.0.2400.1]
   633,131,735      565   8,392  ???:g_datalist_id_set_data_full [/lib/libglib-2.0.so.0.2400.1]
   628,941,418      358   4,941  ???:g_param_spec_pool_lookup [/usr/lib/libgobject-2.0.so.0.2400.1]
   626,616,067      564   9,879  ???:0x000000000000aea0 [/usr/lib/libgobject-2.0.so.0.2400.1]
   614,018,430      551   9,681  ???:0x0000000000025920 [/usr/lib/libgobject-2.0.so.0.2400.1]
   610,848,657      439   7,175  ???:g_malloc0_n [/lib/libglib-2.0.so.0.2400.1]
   599,351,913      533   7,382  ???:0x000000000000b9a0 [/usr/lib/libgobject-2.0.so.0.2400.1]
   593,371,162      602   9,166  ???:0x00000000000115b0 [/usr/lib/libgobject-2.0.so.0.2400.1]
   588,333,943    5,579  10,377  /home/aurel32/eglibc/eglibc-2.11.1/nptl/pthread_mutex_unlock.c:__pthread_mutex_unlock_usercnt [/lib/libpthread-2.11.1.so]
   558,597,348      449   7,409  ???:g_object_get [/usr/lib/libgobject-2.0.so.0.2400.1]
   553,523,640      444   7,381  ???:g_object_get_valist [/usr/lib/libgobject-2.0.so.0.2400.1]
   537,730,685      481   6,989  ???:g_value_transform [/usr/lib/libgobject-2.0.so.0.2400.1]
   534,142,949      484   7,021  ???:g_type_check_is_value_type [/usr/lib/libgobject-2.0.so.0.2400.1]
   533,702,824      395   5,585  ???:g_type_check_instance_is_a [/usr/lib/libgobject-2.0.so.0.2400.1]
   516,982,476      473   6,481  ???:g_signal_handlers_disconnect_matched [/usr/lib/libgobject-2.0.so.0.2400.1]
   515,890,850      503   6,158  ???:g_slice_alloc [/lib/libglib-2.0.so.0.2400.1]
   514,309,133      477   7,597  ???:g_type_create_instance [/usr/lib/libgobject-2.0.so.0.2400.1]
   510,544,906      547   8,245  ???:g_value_init [/usr/lib/libgobject-2.0.so.0.2400.1]
   507,549,106      355   4,326  ???:g_slice_free1 [/lib/libglib-2.0.so.0.2400.1]
   507,099,447      420   5,659  /home/pescio/gitroot/libgda/libgda/gda-statement.c:gda_statement_set_property [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
   502,980,252      395   5,858  ???:g_type_check_value_holds [/usr/lib/libgobject-2.0.so.0.2400.1]
   500,661,405      233   3,898  /home/aurel32/eglibc/eglibc-2.11.1/wcsmbs/mbrtowc.c:mbrtowc [/lib/libc-2.11.1.so]
   499,395,888      376   6,092  /home/pescio/gitroot/libgda/libgda/gda-holder.c:gda_holder_set_attribute [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
   492,626,969      399   6,408  /home/pescio/gitroot/libgda/libgda/gda-statement.c:gda_statement_get_property [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
   489,523,441      515   8,737  /home/pescio/gitroot/libgda/libgda/sql-parser/gda-statement-struct.c:gda_sql_statement_free [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
   478,989,842      408   7,056  /home/pescio/gitroot/libgda/libgda/sql-parser/gda-statement-struct-parts.c:gda_sql_expr_copy'2 [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
   478,510,675      439   6,008  ???:0x0000000000021a00 [/usr/lib/libgobject-2.0.so.0.2400.1]
   465,245,206      247   3,267  /home/pescio/gitroot/libgda/libgda/sqlite/sqlite-src/sqlite3.c:sqlite3VdbeRecordCompare [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
   465,229,894      229   5,120  /home/aurel32/eglibc/eglibc-2.11.1/posix/regex_internal.c:re_acquire_state_context [/lib/libc-2.11.1.so]
   456,187,772      403   6,634  /home/aurel32/eglibc/eglibc-2.11.1/posix/regcomp.c:regfree [/lib/libc-2.11.1.so]
   455,469,419      401   6,634  /home/aurel32/eglibc/eglibc-2.11.1/posix/regcomp.c:free_dfa_content [/lib/libc-2.11.1.so]
   453,146,384      480   8,182  /home/pescio/gitroot/libgda/libgda/sql-parser/gda-statement-struct-select.c:_gda_sql_statement_select_free [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
   449,026,468      197   3,557  /home/aurel32/eglibc/eglibc-2.11.1/posix/regcomp.c:calc_eclosure_iter [/lib/libc-2.11.1.so]
   426,052,844      300   4,803  /home/pescio/gitroot/libgda/libgda/gda-attributes-manager.c:gda_attributes_manager_get [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
   423,728,634      209   3,591  /home/aurel32/eglibc/eglibc-2.11.1/locale/coll-lookup.c:__collseq_table_lookup [/lib/libc-2.11.1.so]
   414,486,497      179   3,644  /home/aurel32/eglibc/eglibc-2.11.1/posix/regex_internal.c:re_node_set_insert_last [/lib/libc-2.11.1.so]
   406,072,451      277   3,555  /home/aurel32/eglibc/eglibc-2.11.1/string/../sysdeps/x86_64/memcpy.S:memcpy [/lib/libc-2.11.1.so]
   399,223,000      284   2,988  /home/pescio/gitroot/libgda/libgda/sqlite/sqlite-src/sqlite3.c:btreeMoveto [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
   388,273,770      319   4,668  ???:g_closure_invoke'2 [/usr/lib/libgobject-2.0.so.0.2400.1]
   384,809,434      327   5,385  /home/pescio/gitroot/libgda/libgda/sql-parser/gda-statement-struct-parts.c:gda_sql_operation_copy [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
   360,573,399      409   7,186  ???:g_slist_foreach [/lib/libglib-2.0.so.0.2400.1]
   359,492,108      316   4,481  ???:0x000000000000e100 [/usr/lib/libgobject-2.0.so.0.2400.1]
   357,477,418      338   4,894  ???:g_value_reset [/usr/lib/libgobject-2.0.so.0.2400.1]
   344,974,120      373   6,769  /home/pescio/gitroot/libgda/libgda/sql-parser/gda-statement-struct-parts.c:gda_sql_expr_free [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
   344,545,928    2,211   3,401  /home/pescio/gitroot/libgda/libgda/sqlite/sqlite-src/sqlite3.c:sqlite3VdbeHalt [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
   342,338,943      227   3,363  /home/pescio/gitroot/libgda/libgda/sqlite/sqlite-src/sqlite3.c:sqlite3VdbeMemGrow [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
   337,563,502    1,325   3,517  ???:g_strdup [/lib/libglib-2.0.so.0.2400.1]
   330,368,013      361   5,613  ???:0x0000000000002330 [/usr/lib/libgthread-2.0.so.0.2400.1]
   318,805,845      257   3,424  /home/pescio/gitroot/libgda/libgda/gda-holder.c:validate_change_accumulator [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
   316,789,207      284   4,505  ???:g_value_get_boxed [/usr/lib/libgobject-2.0.so.0.2400.1]
   316,776,938      175   2,686  /home/aurel32/eglibc/eglibc-2.11.1/posix/regex_internal.c:build_wcs_buffer [/lib/libc-2.11.1.so]
   313,689,771      216   3,114  /home/pescio/gitroot/libgda/libgda/sqlite/sqlite-src/sqlite3.c:closeAllCursors [/home/pescio/gitroot/gitinstalled/usr/lib/libgda-4.0.so.4.1.0]
   313,342,932      125   2,006  ???:g_str_hash [/lib/libglib-2.0.so.0.2400.1]
   310,500,899    1,487   3,561  ???:g_malloc [/lib/libglib-2.0.so.0.2400.1]
   309,117,655   15,669     277  ???:symbol_db_engine_add_new_files_full [/home/pescio/gitroot/gitinstalled/usr/lib/anjuta/libanjuta-symbol-db.so]
   303,310,690   15,445      44  ???:sdb_engine_add_new_db_file [/home/pescio/gitroot/gitinstalled/usr/lib/anjuta/libanjuta-symbol-db.so]
   289,175,457      256   3,783  ???:g_value_take_boxed [/usr/lib/libgobject-2.0.so.0.2400.1]


So, are you certain there are no optimization opportunities in the higher functions (at the top of the sorted list)? I haven't studied the data in detail, but it's worth exploring there.
Comment 56 Naba Kumar 2010-06-25 10:38:35 UTC
Actually, I looked at the branch and I don't think the cleanup is done yet. You still need to remove all the tuple things and use the database API as it's supposed to be used. You can find plenty of examples in sdb-model-*, sdb-query.c. Let's proceed as planned and agreed: first cleanup, then macro-optimization, and finally micro-optimization.
Comment 57 Massimo Cora' 2010-06-27 20:45:57 UTC
(In reply to comment #56)
> Actually, I looked at the branch and I don't think the cleanup is done yet.
> You still need to remove all the tuple things and use the database API as
> it's supposed to be used. You can find plenty of examples in sdb-model-*,
> sdb-query.c. Let's proceed as planned and agreed: first cleanup, then
> macro-optimization, and finally micro-optimization.

Sure, I still have to make some modifications, but due to time constraints I preferred to run a quick benchmark test, just out of curiosity. Next week I'll probably have more time and the code cleanup can go ahead.
Comment 58 Massimo Cora' 2010-07-02 23:24:38 UTC
(In reply to comment #56)
> Actually, I looked at the branch and I don't think the cleanup is done yet.
> You still need to remove all the tuple things and use the database API as
> it's supposed to be used. You can find plenty of examples in sdb-model-*,
> sdb-query.c. Let's proceed as planned and agreed: first cleanup, then
> macro-optimization, and finally micro-optimization.

I've completed the cleanup of the core code on the sdb-core-trans branch:

* removed unused/useless queries.

* removed sdb_engine_get_tuple_id_by_unique_name where possible. In some cases its usage is mandatory and cannot be avoided. See symbol_db_engine_project_exists (), etc.

* split sdb_engine_add_new_symbol () so that it's more readable. Used the DB API where possible, reducing the number of direct selects done with libgda.

* sdb_engine_second_pass_update_heritage () has been temporarily commented out because the scope issue (bug #615403) should be taken into consideration before proceeding.

* removed all tablemaps and flush logic.

* added a single transaction that, on the first population, performs only inserts. Some of those inserts have selects inside (see the sketch after this list).

* sym_type has been merged into symbol. Scope remains separate.
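
For illustration only, here is a minimal sketch of the single-transaction idea, assuming an already-open GdaConnection *cnc and a prepared INSERT statement insert_stmt with its parameter set params obtained via gda_statement_get_parameters (); the variable names and the "symbolname" holder id are hypothetical, not the actual symbol-db code:

GError *error = NULL;
gint i;

/* one explicit transaction around the whole first population instead of an
   implicit transaction per INSERT */
if (!gda_connection_begin_transaction (cnc, "sdb_populate",
                                       GDA_TRANSACTION_ISOLATION_UNKNOWN,
                                       &error))
        g_error ("cannot begin transaction: %s", error->message);

for (i = 0; i < n_symbols; i++)
{
        /* bind the per-symbol values; "symbolname" is a made-up holder id */
        gda_set_set_holder_value (params, NULL, "symbolname", symbols[i].name);
        gda_connection_statement_execute_non_select (cnc, insert_stmt, params,
                                                     NULL, NULL);
}

/* a single commit at the end */
gda_connection_commit_transaction (cnc, "sdb_populate", NULL);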

What I see now: while I'm happy with the improved readability of the core, we're not quite OK performance-wise. We're still at 13 ms per symbol on average and around 35-40 seconds for the full population.

I already tried removing/adding indexes, but the insertion queries are so simple that we can hardly improve there.
I'm running out of ideas for macro-optimizations now. With the above changes the code has been cleaned up, but performance hasn't improved by even 1 ms compared to the 'old' master without tablemaps.
It took me a while to accept it, but the tablemaps weren't the right thing to do. My conclusion is then just one: libgda is (maybe) too slow. To prove that I'll try to write sample programs comparing the libgda and sqlite APIs. Something like the programs in comment #23 should be fine.
What if libgda really is so much slower than sqlite? Well, I don't know; we would probably have to drop the libgda calls in the core while keeping them in the other modules.
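
Just to give an idea of what such a sample program could look like on the raw sqlite side, here is a minimal sketch (the table, column and row count are made up for illustration; the libgda variant would run the same loop through a GdaStatement, and comparing the two timings is the whole point):

#include <sqlite3.h>
#include <glib.h>

int main (void)
{
        sqlite3 *db;
        sqlite3_stmt *stmt;
        GTimer *timer;
        int i;

        sqlite3_open ("bench.db", &db);
        sqlite3_exec (db, "CREATE TABLE sym (id INTEGER PRIMARY KEY, name TEXT)",
                      NULL, NULL, NULL);

        timer = g_timer_new ();
        sqlite3_exec (db, "BEGIN", NULL, NULL, NULL);
        sqlite3_prepare_v2 (db, "INSERT INTO sym (name) VALUES (?)", -1,
                            &stmt, NULL);

        for (i = 0; i < 100000; i++)
        {
                /* same bound value every time: we only care about insert cost */
                sqlite3_bind_text (stmt, 1, "a_symbol", -1, SQLITE_STATIC);
                sqlite3_step (stmt);
                sqlite3_reset (stmt);
        }

        sqlite3_finalize (stmt);
        sqlite3_exec (db, "COMMIT", NULL, NULL, NULL);

        g_print ("raw sqlite inserts: %.2f s\n", g_timer_elapsed (timer, NULL));

        g_timer_destroy (timer);
        sqlite3_close (db);
        return 0;
}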
Comment 59 Massimo Cora' 2010-07-08 22:48:21 UTC
With commits
http://git.gnome.org/browse/anjuta/commit/?h=sdb-core-trans&id=fcfc590238769c0ceba0e7bd03a0b98eb2bef69b

http://git.gnome.org/browse/anjuta/commit/?h=sdb-core-trans&id=d4777274dee81887497905145175354be6fd605c

http://git.gnome.org/browse/anjuta/commit/?h=sdb-core-trans&id=3af5a1e1b395a4d4bde71d3adba1ebcfca35b801

we got enough information from the libgda vs sqlite benchmarks.
In particular, libgda is, in the best case, 2 times slower than sqlite: 1.87 s vs 0.80 s.

When libgda retrieves the last_insert_rowid information it becomes 10x slower! 7.70 s vs 0.80 s.

Check the bench programs for some tests.
When sqlite3_last_insert_rowid (db) is used on the sqlite side, its runtime grows only by a few milliseconds, and it still outperforms libgda in any case.


It seems to me, then, that the real bottleneck is the last_insert_rowid retrieval.
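
To make the difference concrete, this is roughly how the new row id is obtained on each side; a sketch only, not the bench code itself, with the libgda call named as I understand the 4.x API (cnc, insert_stmt, params, stmt, db and error are assumed to already exist):

/* libgda side: asking for the "last inserted row" makes libgda build a
   complete GdaSet describing the inserted row, not just its rowid */
GdaSet *last_row = NULL;
gda_connection_statement_execute_non_select (cnc, insert_stmt, params,
                                             &last_row, &error);
/* ... extract the id column from last_row, then g_object_unref (last_row) ... */

/* plain sqlite side: the rowid is already tracked by the library, nothing
   has to be fetched back */
sqlite3_int64 new_id;
sqlite3_step (stmt);
new_id = sqlite3_last_insert_rowid (db);
sqlite3_reset (stmt);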
Comment 60 Johannes Schmid 2010-07-09 06:55:59 UTC
Thanks for checking!

> we got enough information from the libgda vs sqlite benchmarks.
> In particular, libgda is, in the best case, 2 times slower than sqlite:
> 1.87 s vs 0.80 s.

As said before, this looks like the GObject overhead and I don't worry too much about it. I mean, even if it takes 2 seconds to scan the whole project that is pretty difficult to notice.
 
> When libgda retrieves the last_insert_rowid information it becomes 10x
> slower! 7.70 s vs 0.80 s.
> 
> Check the bench programs for some tests.
> When sqlite3_last_insert_rowid (db) is used on the sqlite side, its runtime
> grows only by a few milliseconds, and it still outperforms libgda in any case.
> 
> 
> It seems to me, then, that the real bottleneck is the last_insert_rowid
> retrieval.

That pretty much looks like a bug to me. libgda isn't getting any more information than sqlite, so it shouldn't take that much longer IMHO. Can you point Vivien to it?
Comment 61 Johannes Schmid 2010-07-09 07:02:01 UTC
BTW:
 
> http://git.gnome.org/browse/anjuta/commit/?h=sdb-core-trans&id=d4777274dee81887497905145175354be6fd605c

"symbol-db: removed memory pool on libgda bench. Same timings."

I think that at least means you can drop the memory pool now: it's not faster, and removing it saves code size.
Comment 62 Massimo Cora' 2010-07-09 09:07:07 UTC
(In reply to comment #60)
> Thanks for checking!
> 
> > we got enough information from the libgda vs sqlite benchmarks.
> > In particular, libgda is, in the best case, 2 times slower than sqlite:
> > 1.87 s vs 0.80 s.
> 
> As said before, this looks like the GObject overhead and I don't worry too much
> about it. I mean, even if it takes 2 seconds to scan the whole project that is
> pretty difficult to notice.
> 

Well, actually the last_insert_rowid data is mandatory, so we must have it. That'll make libgda 10x slower.
SQLite grows only by 1-2 ms with sqlite3_last_insert_rowid (): that should also be reflected in libgda, but it surely should not become 10x slower.



> That pretty much looks like a bug to me. libgda isn't getting any more
> information than sqlite, so it shouldn't take that much longer IMHO. Can you
> point Vivien to it?

Yep, I notified him via bug #623891.
Comment 63 Massimo Cora' 2010-07-09 09:17:13 UTC
(In reply to comment #61)
> BTW:
> 
> > http://git.gnome.org/browse/anjuta/commit/?h=sdb-core-trans&id=d4777274dee81887497905145175354be6fd605c
> 
> "symbol-db: removed memory pool on libgda bench. Same timings."
> 
> I think that at least means you can drop the memory pool now: it's not
> faster, and removing it saves code size.

Yep, I thought about this. Let's first see how the last_insert thing works out. If it's not fixable then I'll drop libgda from the core in favour of the pure sqlite API, which is pretty straightforward to use.
A read-only libgda connection to the db will be kept so that symbol-db-query & co. can continue working (see the sketch below).
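
A minimal sketch of that read-only connection, assuming libgda's SQLite provider and its GDA_CONNECTION_OPTIONS_READ_ONLY flag (the connection string and path are placeholders, not the actual symbol-db locations):

GError *error = NULL;
GdaConnection *ro_cnc;

/* read-only connection kept for symbol-db-query & co.; the core would talk
   to the same file through the plain sqlite3 API */
ro_cnc = gda_connection_open_from_string ("SQLite",
                                          "DB_DIR=/path/to/db;DB_NAME=symbols",
                                          NULL,
                                          GDA_CONNECTION_OPTIONS_READ_ONLY,
                                          &error);
if (ro_cnc == NULL)
        g_warning ("cannot open read-only connection: %s", error->message);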
Comment 64 Massimo Cora' 2010-07-11 10:31:40 UTC
As we can see from bug #623891, a change in libgda brought the benchmark time down from 7.70 to (here) 4.60 seconds.

The total time taken to populate Anjuta's db is now 26 seconds, whereas before the patch it was 35.

As we can see from Vivien's comment ("However last_insert_rowid in Libgda actually retreives a complete row of data, which is more than only getting a rowid, so I guess the SQlite test should also include retreiving the complete row."), we don't need the complete last row of data, just the rowid.
Maybe we can ask whether, in a future version, he can provide a rowid-only retrieval.

Now I ask you: are these timings acceptable? If so, I'll also drop the memory pool system, which at this point doesn't seem to bring much improvement. Then I'll merge sdb-core-trans into master and proceed with other bugs.

If not, the only way (for now) is to use the sqlite APIs, which could definitely bring us down to 14-15 seconds for the total population. libgda will continue to live in a read-only connection for the other modules, such as the querying system etc. The sqlite APIs will be used only in the core.

Let me know what you think.
Comment 65 Massimo Cora' 2010-07-12 22:31:24 UTC
Merged sdb-core-trans into master.
I've reached 24 seconds for the total population, removing the memory pool too.
I think the core is much cleaner now. We only need to wait for Vivien to fix libgda to gain a few more seconds.
Comment 66 Naba Kumar 2010-07-16 09:19:00 UTC
Hi Massimo. Awesome! Sorry that I haven't been able to follow up sooner. I will look at your new stuff soon and see if I have comments. In the meantime, could you please get profile data for the new changes (a la comment #54)?
Comment 67 Massimo Cora' 2010-07-27 12:14:51 UTC
Created attachment 166638 [details]
callgrind output 2
Comment 68 Massimo Cora' 2010-07-27 12:15:36 UTC
Created attachment 166639 [details]
cachegrind output 2
Comment 69 Massimo Cora' 2010-07-27 12:19:10 UTC
Here they are. They've been created using

valgrind --tool=cachegrind  --cachegrind-out-file=cachegrind.out  ./.libs/benchmark test-dir

and

valgrind --tool=callgrind --collect-systime=yes --callgrind-out-file=callgrind.out  ./.libs/benchmark test-dir
Comment 70 Sébastien Granjoux 2013-02-25 21:07:17 UTC
Is there something new on this bug or can we close it?
Comment 71 Massimo Cora' 2013-02-25 23:14:26 UTC
I suppose we can close it.
Comment 72 Adam Dingle 2013-04-25 07:05:14 UTC
Sounds good.  Marking as fixed.