*** ajmitch has quit IRC
*** havoc has quit IRC
*** ajmitch has joined #gnuenterprise
*** mnemoc has quit IRC
*** btami has joined #gnuenterprise
*** dcmwai has joined #gnuenterprise
*** kilo has joined #gnuenterprise
*** sjc has joined #gnuenterprise
*** dcmwai has quit IRC
*** dcmwai has joined #gnuenterprise
*** sjc has quit IRC
*** sjc has joined #gnuenterprise
*** nickr has quit IRC
*** reinhard has joined #gnuenterprise
*** dcmwai has quit IRC
*** johannesV has joined #gnuenterprise
hello again ... :)
good morning Austria
hi kilo
johannesV! the return of the king! :-)
:)
i've already read all those irc-logs ...
where have you been?
it was quite a lot
i had an operation ...
oh
and my nose is still *big* and the doctor told me not to work at my computer for about a week ... :(
but i'd say about half an hour of gnue is better than the medicine ... :)
whisky is the only medicine for any disease 8-)))
:)
i'm off for today .. bye
*** johannesV has quit IRC
*** holycow has joined #gnuenterprise
*** reinhard has left #gnuenterprise
*** reinhard has joined #gnuenterprise
*** havoc has joined #gnuenterprise
bbl
*** btami has quit IRC
*** havoc has quit IRC
*** havoc has joined #gnuenterprise
*** mnemoc has joined #gnuenterprise
*** johannesV has joined #gnuenterprise
hm, found something weird in GDataSource/appserver.data ... we create a lot of 'uncollectable' garbage with DataSourceWrapper () instances ....
hmmm, found something weird in Designer code... no, the whole Designer code is weird 8-)))
lol
well, in data.py we create instances of DataSourceWrapper which won't be available anywhere
but these instances do have references to objects which are available
erm
DataSourceWrapper is a function :) not a class
*** sjc has quit IRC
and returns a DataSource object
but it creates an instance of _DataSourceWrapper
yes
those are more or less normal DataSource instances
bah
(_DataSourceWrapper is a descendant of GDataSource)
correct
look at all those sequences and dictionaries defined in a GDataSource ... all these might contain references ending up in ref-cycles
and, in fact, there are a lot of such cycles ...
wasn't that exactly what we took care of when we did the first memory leak fix round?
no, i don't think so
there we fixed closing result- and recordsets
so did anything change about the number of DataSources we create?
lekma claims that the version of february 12 is memory leak clean
i'm not sure, but i'd say no
i did the following: geasRpcServer got another signal handler (sigusr1)
if it catches this signal, gc is asked to collect everything
all 'uncollectable' stuff gets dumped into a file
so after starting the appserver and sending it a sigusr1 (without anything in between) i get a quite huge file, holding a lot of datasource stuff
ok
after starting a script (against appserver) the file grows
the diff between the files would be interesting
and - now it gets interesting - it has another big bunch of datasource stuff with all the things the script does
right :)
well
this might become a bit complicated
to me it looks like all data fetched by a datasource remains in memory (because it somehow becomes unreachable)
reinhard, true, it already *is* complicated ....
but could you save your current diff (adding the SIGUSR1 handler) to a file
then reset to 12 february version
apply that SIGUSR1 patch to that version
and see whether it already was that bad at that time?
hmm, one major problem with a diff are those memory addresses ...
i can do a diff of two logs from the same process ...
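[Editor's note: a minimal sketch of the SIGUSR1 dump handler described above, assuming hypothetical placement in geasRpcServer; the handler name _dump_garbage and the dump path '/tmp/appserver-garbage.log' are illustrative, not the actual GNUe code. gc.DEBUG_SAVEALL is used so that everything the collector finds unreachable ends up in gc.garbage and can be written out.]

    import gc
    import signal

    def _dump_garbage (signum, frame):
        # Run a full collection; with DEBUG_SAVEALL every unreachable object
        # (not only the uncollectable ones) is kept in gc.garbage
        gc.set_debug (gc.DEBUG_SAVEALL)
        gc.collect ()
        dump = open ('/tmp/appserver-garbage.log', 'w')
        try:
            for item in gc.garbage:
                dump.write ("%r\n" % (item,))
        finally:
            dump.close ()
        # Reset state so normal collection behaviour resumes afterwards
        del gc.garbage[:]
        gc.set_debug (0)

    # Sending SIGUSR1 to the running appserver process triggers the dump
    signal.signal (signal.SIGUSR1, _dump_garbage)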
sorry
misunderstanding
i meant do
svn diff > file
svn update -R
patch < file
then test again
I would like to verify the claim that there was no memory leak at 12 february
of course not demanding anything, just proposing, knowing you are officially still "out of order" for this week
hmm, i can revert the gnue code on my boston to rev 7018, so i do not need to diff my current svn
i'm working on my notebook.... and i'm in bed ... :) so the doctor can't say anything
*lol*
johannesV: but you would need the SIGUSR1 code to test, wouldn't you?
of course, but that's about 5 lines :)
uh
sounded much more complicated :)
ok, maybe 10 :)
johannesV: gotta love your wlan, don't you? ;-)
*** mnemoc has quit IRC
reinhard, i have a rj45 box in every room, so i don't need the wlan :)
currently i'm using a wire ... it's faster :)
wow
wi-fi rules
i remember people at the university who worked in the microwave labs were all fat, bald, and had only daughters... 8-))))
hrmm
* reinhard has only daughters
johannesV has both :)
kilo: what are you implying?
*** jamest has joined #gnuenterprise
nothing. but the fewer waves around, the better you are...
pfft, what doesn't kill you only makes you stronger ;)
reinhard, same leak seems to be in rev. 7018
chillywi1ly: lol. whisky is the clue 8-))
maybe I should go into the office now
coding in your underwear is so productive though ;)
go to work already
why? all they do is call me and ask stupid questions there
and is coding in your underwear in your office productive?
ok, rev 7018 has the same number of unreachable objects/instances as the current svn has
chillywi1ly: and is coding in someone else's underwear productive? 8-))
strange
well, i don't think it's strange ...
i think we should solve this problem in common
*** mnemoc has joined #gnuenterprise
this way remaining unreachables are easier to find :)
i think a datasource should have some mechanism to get 'cleared for removal'
kilo: probably not ;)
where all internal refs (to resultsets, field-collections, listeners and the like) are cleared
so no cycles remain
johannesV: maybe the other way around could be easier
checking which other object still references the DataSource
well, i can try to find this out (maybe cyclops is helpful for this ...)
# objects in root set: 1
# distinct structured objects reachable: 92
# distinct structured objects in cycles: 36
# cycles found: 20
# cycles filtered out: 0
# strongly-connected components: 2
# arcs examined: 638
this is the dump of a *single* datasource returned in data.py :)
it's the DataObject having a pointer to the datasource
2-element cycle
0x40ac956c rc:2 instance gnue.common.datasources.drivers.postgresql.Base.DataObject.DataObject_Object repr:
    this._dataSource ->
0x40a9182c rc:19 instance gnue.common.datasources.GDataSource._DataSourceWrapper repr:
    this._dataObject ->
0x40ac956c rc:2 instance gnue.common.datasources.drivers.postgresql.Base.DataObject.DataObject_Object repr:
bbl
*** kilo has quit IRC
looks like a GDataSource.close() would indeed make sense
maybe we can break the link in an existing close that sets _dataObject to None
like in Base/ResultSet.close (), where _dataObject is set to None
just before that we could set _dataObject._dataSource to None to make sure we clear the link
no I don't think we may do that
we can close the resultset and still want to keep the datasource and the dataobject
hmmm...
I think....
but I'm not entirely sure
bad thing is in data.py we don't have the datasource...
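[Editor's note: the cycle reported by Cyclops and the proposed break in ResultSet.close (), as a minimal self-contained Python sketch. The class names are simplified stand-ins for the gnue-common classes, not the actual code, and as the discussion immediately notes, clearing the back-link in close () is only safe if the datasource and data object are no longer needed afterwards.]

    class DataObject:
        def __init__ (self, dataSource):
            # back-reference: together with DataSource._dataObject this forms
            # the 2-element cycle shown in the Cyclops dump above
            self._dataSource = dataSource

    class DataSource:
        def __init__ (self):
            self._dataObject = DataObject (self)

        def createResultSet (self):
            return ResultSet (self._dataObject)

    class ResultSet:
        def __init__ (self, dataObject):
            self._dataObject = dataObject

        def close (self):
            if self._dataObject is not None:
                # proposed reference-breaking: clear the back-link before
                # dropping our own reference to the data object
                self._dataObject._dataSource = None
                self._dataObject = None

    # usage: once the result set is closed, nothing cyclic keeps the
    # datasource and its data object alive, so gc can reclaim them
    ds = DataSource ()
    rs = ds.createResultSet ()
    rs.close ()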
right, as i've said before
we have no ref to the datasource we create resultsets of
there are times when I think that gnue-common's datasource system is unnecessarily complex and bloated
but that's only when I have a bad day I think ;-)
*lol*
johannesV: I think we can do it like you said
hmm, i've said a lot of things .... :)
johannesV: like in Base/ResultSet.close ()
johannesV: where _dataObject is set to None
johannesV: just before that we could set _dataObject._dataSource to None
ah
well, but this would mean if there are two resultsets of the same datasource we might get into trouble ... don't we?
*can* a datasource have more than one resultset?
not in our case (appserver)
but if i think of trigger-code in forms it might
alternatively we could keep track of all datasources created
and in data.connection.close () we could resolve the links
we could in data.py, where we call the resultSet.close(), also do a resultSet._dataObject._dataSource.close()
actually I would wonder if a resultset can survive without a datasource
that would be the nicest thing
then we could kill both datasource and dataobject immediately after creating the resultset
right
i think there are some triggers fired (like on-new-record and the like ... )
*** jcater has joined #gnuenterprise
yes
and dataObject is used quite often AFAICT
then again I think nobody except appserver calls ResultSet.close() anyway
I think resultset.close() was added as appserver developers didn't like the warning messages :)
we just haven't gotten around to using the function in the other tools yet :-/
jcater: no, resultset.close() was created to clean up memory leaks
or, better put, cyclic object references that would otherwise result in memory leaks
I must be thinking of connection.close() then
yes, that was that
ok, if i add reference-breaking code into Base/ResultSet.close () some cyclic refs are removed
but there are still 116 left (after appserver's startup)
so we have other cycles as well?
right
a lot of them :)
question is: a lot of different ones or all of the same kind :)
there are instances of DBSIG2.ResultSets as well as a lot of dicts and sequences
it's hard to tell ... though
and a lot of bound methods !??
will try something different .... (adding a close () function to GDataSource)
wohoo .. only 45 left
ok, now all unreachables are gone (after appserver's startup)
reinhard, if we keep track of all datasources created in data.py and delete them in connection.close () there's no need of a change in common
but we'd have to keep track of datasources per connection (data.py) instance
anyway, if i execute 'dirty.py' (in appserver's sample/testing) 2003 unreachable instances are left (after sending a sigusr1)
so there must be other leaks in there too
johannesV: to delete the datasource instances on connection close seems a bit late to me
there might be connections open for several days
so I'd prefer to delete the datasource whenever the matching recordset is deleted
*** titopbs has joined #gnuenterprise
well, so we still need to keep track of the datasources created
yes of course
even more detailed: as i see things, we should 'use' the datasource instead of the resultset in data.py
instead of 'silently dropping' it
maybe yes
so not work with the resultset but with the datasource
like the _createEmptyResultSet () function ...
it creates another datasource and returns only a resultset
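[Editor's note: a sketch of the GDataSource.close () idea that reduced the unreachable count above: a 'cleared for removal' method that drops all internal references so no cycles remain. The attribute names _resultSets, _fieldList and _listeners are assumptions for illustration; the real GDataSource carries considerably more state.]

    class GDataSource:
        def __init__ (self):
            self._dataObject = None
            self._resultSets = []          # hypothetical bookkeeping
            self._fieldList  = []
            self._listeners  = []

        def close (self):
            # close dependent result sets first, then clear every internal
            # reference that could keep a cycle alive
            for resultSet in self._resultSets:
                resultSet.close ()
            self._resultSets = []
            self._fieldList  = []
            self._listeners  = []
            if self._dataObject is not None:
                self._dataObject._dataSource = None
                self._dataObject = None

The corresponding bookkeeping would be keeping track of created datasources (for example per connection in appserver's data.py) and calling such a close () on each of them; whether that happens at connection close or as soon as the matching recordset is dropped is exactly the trade-off debated above.]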
yes
actually my understanding is that you should be able to do everything you want through the datasource
not caring about all the different objects lying below
*** johannesV_ has joined #gnuenterprise
bbl
*** johannesV has quit IRC
*** holycow has quit IRC
*** nickr has joined #gnuenterprise
*** johannesV_ has quit IRC
*** wendall911 has joined #gnuenterprise
*** dcmwai has joined #gnuenterprise
*** holycow has joined #gnuenterprise
*** dcmwai has quit IRC
*** sjc has joined #gnuenterprise
*** btami has joined #gnuenterprise
*** johannesV has joined #gnuenterprise
*** johannesV has quit IRC
*** kilo has joined #gnuenterprise
*** btami has quit IRC
night all
*** reinhard has quit IRC
*** kilo has quit IRC
*** jcater has quit IRC
*** jamest has quit IRC
*** titopbs has quit IRC
*** nickr has quit IRC
*** nickr has joined #gnuenterprise
*** jcater has joined #gnuenterprise
*** jcater has quit IRC
*** jamest has joined #gnuenterprise
*** sjc has quit IRC
*** tiredbones has joined #gnuenterprise
*** holycow has quit IRC
*** titopbs has joined #gnuenterprise
*** holycow has joined #gnuenterprise
*** holycow has quit IRC
*** holycow has joined #gnuenterprise
*** wendall_away has left #gnuenterprise
*** holycow has quit IRC
*** holycow has joined #gnuenterprise
*** havoc has quit IRC
*** havoc has joined #gnuenterprise
*** holycow has quit IRC
*** titopbs has quit IRC
*** dcmwai has joined #gnuenterprise
*** tiredbones has quit IRC