Thursday, April 07, 2005

Jason.OPF.2

SQL.com.my (3.5.78.366) says:
ask u one question
SQL.com.my (3.5.78.366) says:
about the unidirectional way of reading records u always recommend...
Jason says:
but don't know what type of data structure will be put in physical RAM and what type will be put in virtual memory
SQL.com.my (3.5.78.366) says:
if we read one by one, will the speed be slow?
SQL.com.my (3.5.78.366) says:
or can fetch batch by batch from DB server?
Jason says:
some DB allow batch by batch
Jason says:
but interbase/FB is one by one
SQL.com.my (3.5.78.366) says:
i see
Jason says:
but it doesn't matter
Jason says:
bcoz the difference is not big
SQL.com.my (3.5.78.366) says:
i see
Jason says:
if u read one by one, u issue more api commands, but the execution of api commands is very fast even for 1000000 times
SQL.com.my (3.5.78.366) says:
just now we tried the cxgrid in unbound mode; loading 100K rows with about 6 fields takes about 4.8 seconds
Jason says:
if one record doesn't fill up the tcp packet, they will fetch the second record at the same time
Jason says:
in IB/FB
Jason says:
one api call will make them fetch as many records as will fit in one tcp packet
SQL.com.my (3.5.78.366) says:
i see.
SQL.com.my (3.5.78.366) says:
so quite efficient too
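
(For reference, a rough sketch of the unidirectional read loop being discussed, in Delphi with dbExpress. The connection, table and field names are only illustrative, not taken from the chat:)

// uses SqlExpr, DB;
procedure SumInvoiceAmounts(Conn: TSQLConnection);
var
  Q: TSQLQuery;
  Total: Currency;
begin
  Q := TSQLQuery.Create(nil);
  try
    Q.SQLConnection := Conn;                        // assumed already connected
    Q.SQL.Text := 'SELECT INVOICE_NO, AMOUNT FROM INVOICE';
    Q.Open;                                         // dbX datasets are unidirectional
    Total := 0;
    while not Q.Eof do
    begin
      // each Next looks like a one-by-one fetch, but with IB/FB the client
      // library pulls as many rows as fit in one TCP packet per round trip
      Total := Total + Q.FieldByName('AMOUNT').AsCurrency;
      Q.Next;
    end;
  finally
    Q.Free;
  end;
end;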
Jason says:
what are the 6 fields?
Jason says:
what type?
SQL.com.my (3.5.78.366) says:
mostly string fields.
SQL.com.my (3.5.78.366) says:
all string fields.
Jason says:
ic, then it's quite efficient already
Jason says:
with unidirectional dataset u mean?
SQL.com.my (3.5.78.366) says:
tsqldataset
SQL.com.my (3.5.78.366) says:
the dbxpress
Jason says:
but even if u use clientDataSet -> provider -> TSQLDataSet, it's the same thing, they still read one by one
SQL.com.my (3.5.78.366) says:
then we tried the provider mode, loading the rows into a TCollection; also about 4.6 seconds
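
(And a runtime sketch of the ClientDataSet -> provider -> TSQLDataSet chain being compared here; normally these components are dropped on a datamodule at design time, and all names below are illustrative:)

// uses SqlExpr, Provider, DBClient;
procedure LoadAllViaProvider(Conn: TSQLConnection);
var
  Holder: TComponent;
  SDS: TSQLDataSet;
  Prov: TDataSetProvider;
  CDS: TClientDataSet;
begin
  Holder := TComponent.Create(nil);      // common owner so ProviderName can be resolved
  try
    SDS := TSQLDataSet.Create(Holder);
    SDS.SQLConnection := Conn;
    SDS.CommandText := 'SELECT * FROM INVOICE';

    Prov := TDataSetProvider.Create(Holder);
    Prov.Name := 'provInvoice';
    Prov.DataSet := SDS;

    CDS := TClientDataSet.Create(Holder);
    CDS.ProviderName := 'provInvoice';
    CDS.Open;    // the provider still reads one record at a time from the dbX dataset
    // ... all CDS.RecordCount rows are now buffered in memory ...
  finally
    Holder.Free;                          // frees all three components
  end;
end;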
Jason says:
unless u use IBX or IBObject then it's different
Jason says:
dbX + IB/FB always one by one
SQL.com.my (3.5.78.366) says:
so IBX more efficient?
Jason says:
no!!
SQL.com.my (3.5.78.366) says:
but the provider mode suffers from another problem.
SQL.com.my (3.5.78.366) says:
the sorting/filtering/grouping all get slow
SQL.com.my (3.5.78.366) says:
very slow
SQL.com.my (3.5.78.366) says:
for the 100k rows case.
SQL.com.my (3.5.78.366) says:
the unbound mode gives very fast response for sort/filter.
Jason says:
IBXdataset itself is ClientDataSet + unidirectional dataset, so it carries significant extra overhead. IBX's unidirectional dataset is 2 times slower than dbX
SQL.com.my (3.5.78.366) says:
hah, i see
Jason says:
that's why I suggested u do it in dbx + unbound mode without an in-memory dataset
SQL.com.my (3.5.78.366) says:
but i have another problem here.
SQL.com.my (3.5.78.366) says:
using dbx + unbound mode i can't do the OPF design
Jason says:
if u use ClientDataSet + provider + dbx to read all, then IBX read all is more efficient
Jason says:
why can't?
SQL.com.my (3.5.78.366) says:
i better avoid using the DA, because they use clientdataset too.
SQL.com.my (3.5.78.366) says:
since the rows' data are kept in the grid's datacontroller.
SQL.com.my (3.5.78.366) says:
how to OPF ?
SQL.com.my (3.5.78.366) says:
the cxgrid unbound mode performance is very consistent, say 100k takes n seconds, then 200k takes 2n seconds.
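
(A hedged sketch of that unbound-mode load: filling the cxGrid view's DataController straight from a unidirectional dbX query. It assumes the view already has one column per field; all names are illustrative:)

// uses cxGridTableView, SqlExpr;
procedure LoadUnbound(View: TcxGridTableView; Q: TSQLQuery);
var
  Rec, Col: Integer;
begin
  View.DataController.BeginUpdate;       // suspend recalculation/repainting while loading
  try
    View.DataController.RecordCount := 0;
    Rec := 0;
    while not Q.Eof do
    begin
      View.DataController.RecordCount := Rec + 1;   // grow one row at a time, to keep the sketch simple
      for Col := 0 to Q.FieldCount - 1 do
        View.DataController.Values[Rec, Col] := Q.Fields[Col].Value;
      Inc(Rec);
      Q.Next;
    end;
  finally
    View.DataController.EndUpdate;
  end;
end;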
Jason says:
in provider mode, where are data kept?
SQL.com.my (3.5.78.366) says:
in provider mode, the data is kept in my own opf data structure.
SQL.com.my (3.5.78.366) says:
but it suffers on the sort/filter operations
SQL.com.my (3.5.78.366) says:
i couldn't rely on devex's for the data structure.
Jason says:
u need to do something to bridge unbound mode and ur opf data structure, it's the same thing u need to do if u are using clientdataset etc
SQL.com.my (3.5.78.366) says:
it seems impossible.
SQL.com.my (3.5.78.366) says:
the bridge devex provides is the customdatasource.
SQL.com.my (3.5.78.366) says:
if i keep two copies of data, then ram consumption will double, one copy for my OPF, one for cxgrid.
Jason says:
I don't really understand OPF
Jason says:
tell me
SQL.com.my (3.5.78.366) says:
there is one solution...
Jason says:
how do u read the 3rd record
Jason says:
for example?
SQL.com.my (3.5.78.366) says:
i implement my own TDataSet descendant.
SQL.com.my (3.5.78.366) says:
but still doesn't solve the problem.
SQL.com.my (3.5.78.366) says:
it will double up the ram too.
Jason says:
how?
Jason says:
let say TInvoice
Jason says:
if u want to read the 3rd Invoice
Jason says:
then how?
Jason says:
Invoice.Row[3]?
Jason says:
or?
SQL.com.my (3.5.78.366) says:
yes.
SQL.com.my (3.5.78.366) says:
something like that.
SQL.com.my (3.5.78.366) says:
if we use the DB grid view, the cxgriddbtableview will cache its own copy of data from the dataset right?
Jason says:
yes, if u use db grid view, there will be one copy in TDataSet, another copy in DataController
SQL.com.my (3.5.78.366) says:
ya, so using the customdataset solution doesn't solve the problem either.
SQL.com.my (3.5.78.366) says:
the extra RAM must be consumed too.
SQL.com.my (3.5.78.366) says:
unless go to provider mode.
Jason says:
why doesn't Invoice.Row[3] return datacontroller.Rows[3]? in this way u don't have to double-cache the data
SQL.com.my (3.5.78.366) says:
yes, i think that too.
SQL.com.my (3.5.78.366) says:
but the datacontroller.rows[3] structure doesn't fit the OPF datastructure i want.
SQL.com.my (3.5.78.366) says:
we have to keep track of the old value and new value for updating later.
SQL.com.my (3.5.78.366) says:
the datacontroller doesn't have this function.
SQL.com.my (3.5.78.366) says:
but if i do that in browse grid, that is ok.
Jason says:
I think u just need another inner structure called Delta
SQL.com.my (3.5.78.366) says:
yes, i need that.
Jason says:
so I think u don't need to cache data twice at all
SQL.com.my (3.5.78.366) says:
i am stuck here.
SQL.com.my (3.5.78.366) says:
seems like no good solution.
SQL.com.my (3.5.78.366) says:
but the delta structure u mentioned shows a hint to me...
Jason says:
so u still need to cache data twice?
SQL.com.my (3.5.78.366) says:
not so fast to commit yet.
SQL.com.my (3.5.78.366) says:
perhaps i need to think the delta structure first.
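
(Here is one possible shape of what Jason is suggesting: the OPF object keeps no copy of the row at all, it reads and writes through the grid's DataController, and only the changes go into a small delta list for persisting later. TcxCustomDataController is the DevExpress class; everything else is an illustrative guess, not an agreed design:)

// uses Variants, cxCustomData;
type
  TFieldDelta = record
    ItemIndex: Integer;
    OldValue, NewValue: Variant;
  end;

  TInvoiceRow = class
  private
    FData: TcxCustomDataController;   // shared with the grid, not owned
    FRecIndex: Integer;
    FDelta: array of TFieldDelta;
    function GetValue(AItemIndex: Integer): Variant;
    procedure SetValue(AItemIndex: Integer; const V: Variant);
  public
    constructor Create(AData: TcxCustomDataController; ARecIndex: Integer);
    property Values[AItemIndex: Integer]: Variant read GetValue write SetValue;
  end;

constructor TInvoiceRow.Create(AData: TcxCustomDataController; ARecIndex: Integer);
begin
  inherited Create;
  FData := AData;
  FRecIndex := ARecIndex;
end;

function TInvoiceRow.GetValue(AItemIndex: Integer): Variant;
begin
  // no second copy of the data: read it from the DataController on demand
  Result := FData.Values[FRecIndex, AItemIndex];
end;

procedure TInvoiceRow.SetValue(AItemIndex: Integer; const V: Variant);
var
  N: Integer;
begin
  // remember old + new value so the UPDATE statement can be generated later
  N := Length(FDelta);
  SetLength(FDelta, N + 1);
  FDelta[N].ItemIndex := AItemIndex;
  FDelta[N].OldValue := FData.Values[FRecIndex, AItemIndex];
  FDelta[N].NewValue := V;
  FData.Values[FRecIndex, AItemIndex] := V;
end;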
Jason says:
sorry was disconnected
SQL.com.my (3.5.78.366) says:
it's ok.
SQL.com.my (3.5.78.366) says:
by using the datacontroller to store the data, i then need another mechanism to read/write the data to my OPF structure.
SQL.com.my (3.5.78.366) says:
the OPF is the only way to cross database. i think.
Jason says:
not necessary
Jason says:
but it's a way to cross database
Jason says:
but only well-designed OPF can cross
Jason says:
it might not cross either
SQL.com.my (3.5.78.366) says:
yes
Jason says:
unidirectional is not always the fastest
Jason says:
the behavior varies among db servers too
SQL.com.my (3.5.78.366) says:
but with OPF, i don't need to worry
SQL.com.my (3.5.78.366) says:
if say the firebird + IBX give the best performance, then i can use that way to access.
Jason says:
so in OPF u still have SQL or not?
SQL.com.my (3.5.78.366) says:
yes, i still have SQL.
SQL.com.my (3.5.78.366) says:
i only pass sql string to the persistence layer.
Jason says:
then if there are differences in SQL syntax, then how?
SQL.com.my (3.5.78.366) says:
the persistence layer will interpret the sql statement.
Jason says:
with QG datacontroller, u don't have to worry about the fieldtype already
SQL.com.my (3.5.78.366) says:
i don't have time for that now. if i do the data entry part, then most sql will be able to cross.
SQL.com.my (3.5.78.366) says:
unless doing report, which require diff way of calculation.
Jason says:
I think fieldtype is Delphi's weakest db design
SQL.com.my (3.5.78.366) says:
i have examined the data entry part, it mostly uses insert/update/delete in a simple way.
SQL.com.my (3.5.78.366) says:
only the report will be diff.
SQL.com.my (3.5.78.366) says:
because the entry part doesn't work on a bulk basis.
Jason says:
yeah entry is simple, that's how dataprovider can generate the SQL regardless of what db server is used
SQL.com.my (3.5.78.366) says:
yes.
SQL.com.my (3.5.78.366) says:
but i couldn't pass the SQL string directly to the persistence layer to interpret. there is another problem that blocks it.
SQL.com.my (3.5.78.366) says:
it is the blob field.
Jason says:
yeah
Jason says:
u need to have parameter
SQL.com.my (3.5.78.366) says:
yes. i have to rely on that
Jason says:
and then parameter.asBlob = something
SQL.com.my (3.5.78.366) says:
yes
SQL.com.my (3.5.78.366) says:
so in fact, i pass sql string + tparams to the persistence layer to interpret.
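
(Roughly what that looks like on the dbX side: plain SQL text plus params, with the blob going in as a parameter. The table, field and parameter names are only illustrative:)

// uses Classes, DB, SqlExpr;
procedure SaveDocumentBlob(Conn: TSQLConnection; const DocNo: string; Content: TStream);
var
  Q: TSQLQuery;
begin
  Q := TSQLQuery.Create(nil);
  try
    Q.SQLConnection := Conn;
    Q.SQL.Text := 'UPDATE DOCUMENT SET CONTENT = :CONTENT WHERE DOC_NO = :DOC_NO';
    Q.ParamByName('DOC_NO').AsString := DocNo;
    Q.ParamByName('CONTENT').LoadFromStream(Content, ftBlob);   // the blob travels as a parameter
    Q.ExecSQL;
  finally
    Q.Free;
  end;
end;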
Jason says:
or u just pass it to one-record clientdataset + provider, and ask them to resolve for u
Jason says:
?
SQL.com.my (3.5.78.366) says:
can't if i use my own data structure.
SQL.com.my (3.5.78.366) says:
the clientdataset has its own delta structure
Jason says:
u can use ur own data structure for bulk
Jason says:
but when u are going to persist it, then only pass it to a clientdataset which contains one record only
SQL.com.my (3.5.78.366) says:
clientdataset suffers from the slowness problem as rows grow.
Jason says:
in this case the clientdataset is not going to contain every row
SQL.com.my (3.5.78.366) says:
is that a good way?
Jason says:
every row is still in datacontroller
SQL.com.my (3.5.78.366) says:
if i have 1000 rows of invoice items, i need to pass them to the clientdataset 1000 times to generate the sql for me
Jason says:
only when u want to resolve, pass the ROW of INTEREST into clientdataset and ask them to do the job for u
SQL.com.my (3.5.78.366) says:
i think i generate my own sql faster than clientdataset.
Jason says:
it's not the most efficient way, but it can save u time on coding. And u can become totally independent of clientdataset later, when u have time to do it
Jason says:
bcoz at the moment, u just want to see if ur OPF works
Jason says:
so u can ask clientdataset to do it for u first
Jason says:
then u can enhance it later when u have more time
SQL.com.my (3.5.78.366) says:
the clientdataset needs both the old value and the new value in order to be able to generate sql.
Jason says:
if u want to optimize every small part first, then it will take u forever to finish a framework
SQL.com.my (3.5.78.366) says:
haha
SQL.com.my (3.5.78.366) says:
u r right too.
Jason says:
bcoz I think the SQL generation looks easy but still takes a certain amount of time
Jason says:
especially the BLOB part
SQL.com.my (3.5.78.366) says:
should be ok.
SQL.com.my (3.5.78.366) says:
i have done a brief study, the insert and delete are the easiest
Jason says:
but if u are confident with it, then no problem
SQL.com.my (3.5.78.366) says:
the update will need to traverse the changed fields in order to generate better sql.
Jason says:
yeah
Jason says:
when u add a record to clientdataset and mergechanges, those values all become old values; then u edit the fields, and those are the new values
Jason says:
then applyupdate
Jason says:
that's it
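
(A sketch of that single-record ClientDataSet trick. CDS is assumed to be an empty one-record dataset already hooked to a provider + dbX dataset for the table, so the provider can turn old/new values into the UPDATE; field names are illustrative:)

// uses SysUtils, DBClient;
procedure ResolveInvoiceRow(CDS: TClientDataSet; const InvNo: string;
  const OldAmount, NewAmount: Currency);
begin
  // 1. put the row's current (old) values into the empty one-record dataset
  CDS.Append;
  CDS.FieldByName('INVOICE_NO').AsString := InvNo;
  CDS.FieldByName('AMOUNT').AsCurrency := OldAmount;
  CDS.Post;
  CDS.MergeChangeLog;                       // now these count as the old values
  // 2. apply the new values coming from the OPF object
  CDS.Edit;
  CDS.FieldByName('AMOUNT').AsCurrency := NewAmount;
  CDS.Post;
  // 3. let the provider generate and execute the UPDATE from old + new values
  if CDS.ApplyUpdates(0) <> 0 then
    raise Exception.Create('resolve failed');
end;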
SQL.com.my (3.5.78.366) says:
yes
SQL.com.my (3.5.78.366) says:
i am still studying the OPF... hope something can come out of it.
SQL.com.my (3.5.78.366) says:
i need to go now. nice talking to u. tonight will play sports.
Jason says:
ok
Jason says:
bye
SQL.com.my (3.5.78.366) says:
bye
