
approaching master detail tables

shahbaz
Offline
Joined: 2005-01-16

I need to do a master-detail view (two tables, where clicking a row in the top one changes the content of the bottom one).
Previously I've done it by having one table model for the master table, listening for a mouse click, retrieving the right data, looking that data up in a hashmap, giving the detail table model access to the new data source, and firing a data-changed event.
It looks like it should be much simpler to do such a thing using filters... but I don't see a way of dynamically inserting/removing filters, only adding a whole filter pipeline.
Secondly, could the treetablemodel have some relevance here?
Finally, do I still have to listen for clicks in the master table, retrieve row information, get the cell that contains the parameter used by the detail table, etc.?
Thanks.

By the way, I accidentally discovered the ColumnControlButton... something I was dreading having to implement... wonder what else is hidden in there :)
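
For what it's worth, here is a minimal plain-Swing sketch of the listener-based approach described above (no JDNC involved; the table data, column names and class name are made up). A ListSelectionListener on the master table's selection model covers both mouse clicks and keyboard navigation, so no mouse listener is needed at all:

[code]
import java.util.HashMap;
import java.util.Map;
import javax.swing.JFrame;
import javax.swing.JScrollPane;
import javax.swing.JSplitPane;
import javax.swing.JTable;
import javax.swing.SwingUtilities;
import javax.swing.table.DefaultTableModel;

public class MasterDetailSketch {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            DefaultTableModel master = new DefaultTableModel(
                    new Object[][] { { "C1", "Acme" }, { "C2", "Globex" } },
                    new Object[] { "id", "customer" });
            DefaultTableModel detail = new DefaultTableModel(new Object[] { "order" }, 0);

            // Detail rows keyed by the master's id column (stand-in for a real data source).
            Map<String, Object[][]> ordersById = new HashMap<>();
            ordersById.put("C1", new Object[][] { { "order-1" }, { "order-2" } });
            ordersById.put("C2", new Object[][] { { "order-3" } });

            JTable masterTable = new JTable(master);
            JTable detailTable = new JTable(detail);

            // A selection listener (rather than a mouse listener) also reacts to keyboard navigation.
            masterTable.getSelectionModel().addListSelectionListener(e -> {
                if (e.getValueIsAdjusting()) return;
                int viewRow = masterTable.getSelectedRow();
                if (viewRow < 0) return;
                String id = (String) master.getValueAt(masterTable.convertRowIndexToModel(viewRow), 0);
                detail.setRowCount(0);                                   // clear old detail rows
                for (Object[] row : ordersById.getOrDefault(id, new Object[0][])) {
                    detail.addRow(row);                                  // fires the table-changed events
                }
            });

            JFrame frame = new JFrame("master/detail");
            frame.add(new JSplitPane(JSplitPane.VERTICAL_SPLIT,
                    new JScrollPane(masterTable), new JScrollPane(detailTable)));
            frame.setSize(400, 300);
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);
        });
    }
}
[/code]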

Nicola Ken Barozzi

jdnc-interest@javadesktop.org wrote:
> Alright, I'll fax the JCA over on Monday, as I am out of the office today. That will give me some time to put some things together as well.
>
> Thanks!

Wohooo! :-D

--
Nicola Ken Barozzi nicolaken@apache.org
- verba volant, scripta manent -
(discussions get forgotten, just code remains)

Nicola Ken Barozzi

jdnc-interest@javadesktop.org wrote:
>>Ok, here is a problem with the current rbair's
>>DataModel-DataSource
>>combo that I see after thinking about your posts:
>>
>> DataModels are not interchangeable
...
> You nailed it. This is one of the things that has been bugging me
> about the way I have DataSources and DataModels arranged. The other
> thing that I don't like is that you _have to know_ what kind of
> DataSource you have so that you can use a specific DataModel for it. For
> instance, a JavaBeanDataModel doesn't mix with a RowSetDataSource. There
> isn't any API to enforce this, you just have to know. Blah, there has to
> be a better way!
...
> Let's say that we have an object called DataStoreConnection which will
...
> Hmmmm..... what do you think?

I understand your frustration with the difficulty of refactoring this good stuff into something better, as most of its problems are gut feelings that are not easily explained.

I'd propose that you work on the reorg part of the incubator, under the binding project, which contains only the binding code (which I refactored somewhat to get a better hang of it).

That way I can also start getting some feedback and bug reports on the reorg, and we can work together on it.

WDYT?


rbair
Offline
Joined: 2003-07-08

> I'd propose you to work on the reorg part of the
> incubator, under the
> binding project, that has only the binding code (that
> I somewhat
> refactored to get a better hang of it).
>
> In this way I can also start having some feedback and
> bugging on the
> reorg and we can work together on that.
>
> WDYT?

Sounds great! I just found the reorg projects, sorry for not being able to check them out sooner! Would you prefer that we hack up what is already there, or create a separate "data2" package acting as an unofficial kind of branch?

Richard

Nicola Ken Barozzi

jdnc-interest@javadesktop.org wrote:
...
>>WDYT?
>
> Sounds great! I just found the reorg projects, sorry for not being
> able to check them out sooner!

:-)

> Would you prefer that we hack up what is
> already there

Definitely!

BTW, there are quite a few refactorings in some packages, and some things
may seem strange to you, as dependencies forced the code into its current
state, so don't hesitate to ask.


rbair
Offline
Joined: 2003-07-08

> > Would you prefer that we hack up what is
> > already there
>
> Definitely!
>
> BTW, there are quite some refactorings in some
> packages, and some things
> may seem strange to you, as dependencies have forced
> me in the current
> state of code, so don't hesitate to ask.

Ok. I'll submit what I have done at the end of the day for you to check out. If I have any questions, I'll fire up a new thread for them.

Richard

Nicola Ken Barozzi

Richard wrote:
...
> Ok. I'll submit what I have done at the end of the day for you to
> check out. If I have any questions, I'll fire up a new thread for them.

If you are committing anything in the binding stuff, don't worry, just
go ahead and do it as frequently as you prefer. I tend to commit very
often, as it can show other developers why I do - or undo - some steps.


wsnyder6
Offline
Joined: 2004-04-20

This thread is growing to epic proportions :)

Lots of good stuff to chew on, Rich... now where to begin...

> 1) The DataStoreConnection is the only class that
> knows about the Data Store in any way
>
> 2) A single query can be reused by multiple
> DataModels
>
> 3) We only need a single DataModel implementation
> because it no longer knows what kind of data it
> encapsulates
>
> 4) DataSource and Query implementations are hidden by
> the DataStoreConnection -- the developer doesn't care
> about the actual implementation
>
> 5) DataStoreConnections can be replaced without
> changing any DataModel/Binding code -- except for the
> caveat that the new DataStoreConnection must have
> queries with the same names as the old
> DataStoreConnection or some DataModels will not be
> populated with data
>
> Its also important to realize that the role that the
> DataSource would be playing here is an implementation
> detail, and not something that the average developer
> would be concerned with. The average developer would
> create a DataStoreConnection, a couple of queries,
> and then a couple of DataModels.
>
> Hmmmm..... what do you think?
>
> Richard

I think this is a good direction to take.

Regarding Points 1 and 2:
This is what I wanted to get across since my initial post on the DataSource/DataModel coupling. (Sometimes I am not very good at explaining things :) )

My team is working on a light JDNC-based framework for rapid UI development within our company. The approach we took to loading/persisting is [i]very[/i] similar to what you have with the DataStoreConnection idea. The main difference is that the load/save DataModel actions are not part of the DataStoreConnection, but rather their own Action-controller object.

The main reason is that we have no need for DataModels to auto-load or auto-save themselves. The developer just creates the Action to load/save and calls it where appropriate. This approach allows for easier (at least for us) transaction handling.

Gosh, all this discussion is so good. I wish we could all get in a room somewhere and hash this out....:)

--Bill

rbair
Offline
Joined: 2003-07-08

> This thread is growing to epic proportions :)

Long live the thread!

> Regarding Points 1 and 2:
> This is what I wanted to get across since my initial
> post on the DataSource/DataModel coupling. (Sometimes
> I am not very good at explaining things :) )

Nah, that's what got me thinking :). It was one of those things that bothered me too; I just didn't see any way around it at the time. Sometimes a good vacation helps clear the mental blocks.

> My team is working on a light JDNC-based framework
> for rapid UI development within our company. The
> approach we took loading/persisting is [i]very[/i]
> similar to what you have with the DataStoreConnection
> idea. The main difference being that the load/save
> DataModel actions are not part of the
> DataSourceConnection, but rather their own
> Action-controller object.

Where do you guys store the Action-controller object? Is a new one instantiated and then used when needed, or do you have a repository of sorts to store these things in?

Hmmm... I have a list of Query objects that are used for populating a DataModel. But there isn't any reason why there couldn't be a more generic "action" object like you're talking about that doesn't necessarily have to return a result.

In fact, such a thing could be used for executing a stored procedure or update query that doesn't necessarily return any results other than a success flag/update count. I'm afraid of gold-plating here, but I can see how it could be done without it being hacked on.

> Gosh, all this discussion is so good. I wish we could
> all get in a room somewhere and hash this out....:)

Ya, I know what you mean. If you're in the Bay Area...

Richard

wsnyder6
Offline
Joined: 2004-04-20

> Where do you guys store the Action-controller object?
> Is a new one instantiated and then used when needed,
> or do you have a repository of sorts to store these
> things in?

We store them in the JDNC ActionManager; they are instantiated and used when needed. DataModels and other necessary objects are handed to the Action using the action.setValue method.
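
For readers following along, here is a rough sketch of that arrangement using the standard javax.swing.Action API. Note that Swing's Action exposes putValue/getValue; the setValue method mentioned above would be a JDNC/ActionManager-specific detail, and the key name and class below are made up:

[code]
import java.awt.event.ActionEvent;
import javax.swing.AbstractAction;

// Action-as-controller: the caller hands it the model to populate via putValue,
// then triggers it like any other Swing action. Nothing here is JDNC-specific.
public class LoadContactsAction extends AbstractAction {
    public static final String DATA_MODEL_KEY = "dataModel";   // made-up key name

    public LoadContactsAction() {
        super("Load contacts");
    }

    public void actionPerformed(ActionEvent e) {
        Object model = getValue(DATA_MODEL_KEY);   // whatever model object the caller attached
        // A real implementation would call the service layer on a worker thread
        // and populate 'model' back on the EDT.
        System.out.println("Loading data into " + model);
    }
}

// usage: action.putValue(LoadContactsAction.DATA_MODEL_KEY, someDataModel);
[/code]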

--Bill


rbair
Offline
Joined: 2003-07-08

Hey Bill,

I have another question for you about the action objects you are using to populate the DataModels. Are any of those objects parameterized? If so, how do you handle the situation where you have multiple DataModels using the same Action, and they all set params that are unique to them?

For example, say I have an Action running some sql statement like "select * from customer where custid=:custid". The :custid is the parameter. I have 2 DataSets tied to that action. The first DataSet sets the param to be "3", and executes it. While it is executing (on the background thread, of course), the second DataSet sets the param to "5". Memory stomping ensues :)

There are a couple of approaches. I suppose one of them would be to have a method on the action like "run(DataSet ds, Params p)" and let it be synchronized appropriately.
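
One possible shape for that per-call approach (names are illustrative; the type parameter simply stands in for whatever DataSet/DataModel class is in play):

[code]
import java.util.Map;

// Parameters travel with each invocation instead of being set on shared action
// state, so two models running the same query concurrently cannot overwrite each
// other's :custid value. Implementations can synchronize internally if they share
// a single JDBC connection.
public interface ParameterizedLoadAction<T> {
    void run(T targetDataSet, Map<String, Object> params) throws Exception;
}
[/code]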

Richard

wsnyder6
Offline
Joined: 2004-04-20

> Hey Bill,
>
> I have another question for you about the action
> objects you are using to populate the DataModels. Are
> any of those objects parameterized? If so, how do you
> handle the situation where you have multiple
> DataModels using the same Action, and they all set
> params that are unique to them?
>
> For example, say I have an Action running some sql
> statement like "select * from customer where
> custid=:custid". The :custid is the parameter. I have
> 2 DataSets tied to that action. The first DataSet
> sets the param to be "3", and executes it. While it
> is executing (on the background thread, of course),
> the second DataSet sets the param to "5". Memory
> stomping ensues :)
>
> There are a couple of approaches. I suppose one of
> them would be to have a method on the action like
> "run(DataSet ds, Params p)" and let it be
> synchronized appropriately.
>
> Richard

Hi Rich,

We use plain javax.swing.Action implementations. They are parameterized insofar as we configure them through the get/setValue methods.

As far as synchronization, we are not handling that in a very robust way at the moment; we synchronize sparingly. Our implementation differs in that we don't tie the Action (or Task) directly to the DataModel/DataSet. So any GUI component can use the action, pass in parameters (usually DataModels) and fire away. All DataModel manipulation is controlled to minimize deadlock.

Can I send you some code examples? At the very least, it will give you one idea of how the DataModel api is being used in the trenches...

--Bill

wsnyder6
Offline
Joined: 2004-04-20

Rich,

Would it be better if I placed some things in the incubator?

Nicola Ken Barozzi

jdnc-interest@javadesktop.org wrote:
> Would it be better if I placed some things in the incubator?

That would be really nice! :-)


wsnyder6
Offline
Joined: 2004-04-20

Alright, I'll fax the JCA over on Monday, as I am out of the office today. That will give me some time to put some things together as well.

Thanks!

--Bill

Kleopatra

Hi Bill,

> Alright, I'll fax the JCA over on Monday, as I am out of the office today.
> That will give me some time to put some things together as well.
>

that's great!

Jeanette


rbair
Offline
Joined: 2003-07-08

Hi again, Aim. I want to address the first part of the first question in a little more detail.

> - The detail panel doesn't appear to support
> editing/storing of changes to the fields;
[snip]

The first and primary reason for this is that I haven't yet wrapped my head around the persistence problem. The demo was primarily to show the navigational/refresh functionality of the DataModels, especially as related to a true master/detail scenario. Now that I'm fairly satisfied with the loading and navigational aspects of the DataModel API, I'm ready to delve into the persistence problem.

Here's how I'm currently thinking of the commit process. First, through some action (either taken by the user or programmatically), some data is loaded into a DataModel. That DataModel then notifies bound components, and they load their respective values from the DataModel. At this point, the user edits the "firstName" field, and changes focus (by means of a tab, or mouse gesture) to another component.

At this point, is the data saved to the DataModel, or do we wait for an explicit command such as when the user hits a "save" button? For the sake of argument, let's say that we save the user's change to the DataModel as soon as possible, in this case, after the firstName text field loses focus.

This would be what I call the first commit, or the first stage commit. The data is committed to the DataModel.

The DataModel could be part of a larger transaction, in which case it may want to notify other DataModels that a certain field was changed (for the reasoning behind this, consider that in my demo if you click on "Quarterstaff" then the table in the "Details" section also contains the text "Quarterstaff", although it is part of a different DataModel. If the JList allowed you to change the name of the items, then the table would be out of sync). Also, remember that this transaction might be distributed. Ok, now things are starting to get a little complex.

At some time in the future, the user might click "refresh", or "undo", or "rollback". If the user chooses "refresh", then the DataModel is reloaded, and all changes are lost, but the transaction is still valid. If the user chooses "undo", then the last change is undone, and the transaction is still valid. If the user chooses "rollback", then the transaction is ended and a refresh is performed.

If the user instead chooses "save" or "commit", then the transaction is committed and the changes are saved to the underlying data store. This would be what I'd call the second stage commit.

It is possible that some application would want to propagate changes to the data store immediately and let the data store worry about the transactions and enforcing them (such as with an in-memory database). This should also be possible, I guess.
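
A very rough sketch of the two stages described above (all names are made up; this is not the incubator API):

[code]
import java.util.HashMap;
import java.util.Map;

// Stage one ("first commit"): an edited value moves from a bound component into
// the model's pending edits, e.g. when the firstName field loses focus.
// Stage two ("second stage commit"): all pending edits are pushed to the data
// store and the transaction ends. "refresh" discards pending edits but leaves
// the transaction open.
public class TwoStageCommitSketch {
    private final Map<String, Object> committedToStore = new HashMap<>();
    private final Map<String, Object> pendingInModel = new HashMap<>();

    public void commitToModel(String field, Object newValue) {   // stage one
        pendingInModel.put(field, newValue);
    }

    public void refresh() {                                       // reload, keep the transaction open
        pendingInModel.clear();   // a real model would also re-read from the store
    }

    public void commitToStore() {                                 // stage two
        committedToStore.putAll(pendingInModel);   // placeholder for the real persistence call
        pendingInModel.clear();
    }
}
[/code]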

Oh well, the ramblings of Friday night :). It seems like transactions, undo/redo and persistence are all intertwined and need to be considered together.

Richard

Amy Fowler

Patrick Wright wrote:
>
> There is one area related to "transactions" which deals with keeping
> multiple changes to a database in sync by treating them as a single unit
> of operation. That usually involves some short-term locking of the
> affected database structures, which may include changes to table rows
> (for insert/update/delete) as well as indexes. There are different
> strategies by which this is implemented, each having different
> performance and other characteristics (lock table, lock rows, use
> snapshots, allow dirty reads, etc).
>
> There is a separate question of concurrent access to data, namely,
> making sure that changes by one person don't conflict with those of
> another.

>
> 2- concurrency controls can be automatically applied, at least for SQL,
> but require a significant number of concessions by the developer.
> PowerBuilder did this for example, the techniques are well known. my
> suggestion there is that it is out of the scope of JDBC at this time.
>

Patrick - I'm in complete agreement that DB transaction mgt is out of
JDNC's scope. This is one reason why I've been advocating explicit
exposure of the JDBC RowSet (vs. some JDNC abstraction) when the app is
dealing with an SQL database. Obviously there are aspects of handling
DB transactions that trickle to the view space, namely query/update actions,
error-handling, etc. But the details should be left to the RowSet &
the JDBC drivers, the latter of which are typically written by the DB
vendors who know what they are doing.

Aim

p.s. Happy New Year to all -- I'm not back from maternity leave, but
trying to pay attention when life cooperates :-) Bino, Rich, &
Jeanette remain in charge!


Patrick Wright

Amy

Thanks for the response.


> Patrick - I'm in complete agreement that DB transaction mgt is out of
> JDNC's scope. This is one reason why I've been advocating explicit
> exposure of the JDBC RowSet (vs. some JDNC abstraction) when the app is
> dealing with an SQL database. Obviously there are aspects of handling
> DB transactions that trickle to the view space, namely query/update
> actions,
> error-handling, etc. But the details should be left to the RowSet &
> the JDBC drivers, the latter of which are typically written by the DB
> vendors who know what they are doing.

I mostly agree, or rather, agree with an amendment. From what I can
tell, there is a series of development tasks that center around the
View/Model layer and a separate set of tasks around persistence.
Hopefully, the View/Model synchronization will be good enough that
when the Model is synchronized, the Model/Persistence mapping can take
over, in whatever form it takes.

In PB, these were sort of merged in earlier releases (the DataWindow
object presented a view, managed the model, and generated SQL), but as
we were developing pure client-server apps against relational dbs it
made life easier.

For JDNC, I am glad that Rich and others are keeping transactions in
mind, so that the design of the Model doesn't hinder the 'plugability'
of persistence mechanisms on the back end.

Taking a cue again from PowerBuilder, as far as persistence was
concerned, when changes were made to the Model, the DataWindow kept
track of rows which were new, updated, and deleted. You could also
programmatically access these (or modify those flags). If changes were
successfully merged to the database, the default behavior was that the
Model would reset internally, and inserted/deleted/updated rows (and
corresponding flags) would be reset, so that the Model showed a single,
non-modified view of the data (reflecting the probable current contents
in the database, without a reload).

Changes were by default sent to the database synchronously (one row at a
time). If the changes caused an error, submission of more rows stopped,
any row updates that did not cause an error had their flags reset, and
one or more rows would still be marked as inserted/updated/deleted.

This was a pretty flexible approach. You could trap the save invocation
and submit all calls in a batch, or map to a stored procedure call, etc.
or just use the default behavior.
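
For concreteness, the row-status bookkeeping described above might look something like this (illustrative only, not PowerBuilder or JDNC code):

[code]
// Each row carries a status flag; a save walks the flagged rows, and rows that
// were persisted successfully are reset so the model ends up showing a clean,
// non-modified view without a reload.
enum RowStatus { NOT_MODIFIED, NEW, UPDATED, DELETED }

class TrackedRow {
    RowStatus status = RowStatus.NOT_MODIFIED;
    Object[] values;

    void markUpdated() {
        if (status == RowStatus.NOT_MODIFIED) status = RowStatus.UPDATED;
    }

    void resetAfterSuccessfulSave() {
        status = RowStatus.NOT_MODIFIED;   // row now matches what is in the database
    }
}
[/code]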

In JDNC, the persistence mechanism could 'plug in' to the 'back' of the
Model. Implementations could then be written for various data sources.

As far as concurrency is concerned, in PB this was done through metadata
and policies. The policies (all 'optimistic' concurrency, FWIC) were set
per-query (which established the updatable row set), so that, for
example, you would match the WHERE clause against the primary key only,
key + timestamp, key + modified columns, or key + all original columns.
If someone had changed the row since your last retrieval, your update
would affect 0 rows, and thus be considered an access conflict. Thus you
had a range of options for addressing concurrent access conflicts. The
CRUD query was just built differently at the time a save was initiated.
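
As a concrete illustration of the "key + modified columns" policy in plain JDBC (the table, columns and method here are invented for the example):

[code]
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class OptimisticUpdateSketch {
    // The WHERE clause matches the primary key plus the original value of the
    // column being changed; an update count of 0 means someone else changed the
    // row since it was read, i.e. a concurrent-access conflict.
    static void updateStreet(Connection con, int custId,
                             String originalStreet, String newStreet) throws SQLException {
        String sql = "UPDATE customer SET street = ? WHERE custid = ? AND street = ?";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, newStreet);
            ps.setInt(2, custId);
            ps.setString(3, originalStreet);
            if (ps.executeUpdate() == 0) {
                throw new SQLException("Row was changed by another user; refresh and retry");
            }
        }
    }
}
[/code]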

Returning to your comment briefly--I am OK if JDNC separates the
concerns as far as design is concerned. But IMO it would be nice to bump
Java up the food chain a little to have some chance of competing with
FileMaker and Access, which have super-easy db-bound form building out
of the box, with reasonable defaults.

Regards
Patrick


rbair
Offline
Joined: 2003-07-08

Hey Oliver,

> Concerning client side and server side transaction I
> would partially disagree with Richard's last post,
> although IMHO it is a tough question.
>
> If editing data inside a GUI is done inside a server
> (database) transaction many databases read lock the
> data until the transaction is ended. In many cases a
> whole table or at least a large or frequently
> accessed page is locked. Consider all this is for
> entering a new customer for an insurance company and
> the whole user table is blocked for other entries
> until the guy filling in the GUI fields finishes
> his/her work. Maybe going to lunch before?
>
> I do not think this will work practically. Maybe it
> would be a better idea to read the data in one
> transaction and write in another. Maybe do a sane
> check before writing back the changes, e.g. if the
> user has been deleted or modified by someone else.

You're right about it being a tough situation. Back in the day I used Borland's InterBase, which did a great job of record level locking. In my next position we used Sybase, which at the time was only doing table level locking. Transactions are next to useless in that situation.

You do have several different transaction isolation levels that can be set on a database via JDBC (assuming the driver and database support them, of course), including read_committed. I know that FirebirdSQL and PostgreSQL support the concept of multi-version concurrency, which handles this kind of situation very well.

In general I think it's better to leave the implementation details of transactions up to the data store for several reasons. First, the data store is most likely robust enough to deal with these situations (most RDBMS systems I've used would qualify). Second, some data stores or application scenarios will want to notify other users of changes to uncommitted data (JDBC has read_uncommitted, for example, and PostgreSQL supports a rudimentary system of push notification of changes to a table, I believe). Third, some applications will want to allow the first user that opens a customer record to be able to edit the record, and give everybody else a read-only copy. Again, this requires that the data store know that a transaction has been started.

However, a situation like you described, where the table is locked in a long-winded transaction, is going to happen, so how do we deal with it? One approach would be to set the transaction to auto-commit (which is available through the JDNC API). With this approach you don't have any sanity checking -- but that might indicate a need for a custom DataSource (or, in the other response, a DataStoreConnection), or perhaps the use of stored procs on the database.
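
For reference, the standard JDBC knobs being referred to here look like this (plain JDBC; whether a given isolation level actually takes effect depends on the driver and database):

[code]
import java.sql.Connection;
import java.sql.SQLException;

public class IsolationSketch {
    static void runInTransaction(Connection con) throws SQLException {
        con.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
        con.setAutoCommit(false);          // explicit transaction instead of auto-commit
        try {
            // ... execute the statements that belong to this unit of work ...
            con.commit();
        } catch (SQLException ex) {
            con.rollback();                // undo everything since setAutoCommit(false)
            throw ex;
        }
    }
}
[/code]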

In any case, in many scenarios I can see communicating with the server as essential. Like you say, though, transactions are a very difficult problem.

Richard

Patrick Wright

It seems (from lurking) that there are different issues being tossed
together here, which is confusing things (for me). To restate (and
possibly clarify...? sorry to be boorish)

There is one area related to "transactions" which deals with keeping
multiple changes to a database in sync by treating them as a single unit
of operation. That usually involves some short-term locking of the
affected database structures, which may include changes to table rows
(for insert/update/delete) as well as indexes. There are different
strategies by which this is implemented, each having different
performance and other characteristics (lock table, lock rows, use
snapshots, allow dirty reads, etc).

There is a separate question of concurrent access to data, namely,
making sure that changes by one person don't conflict with those of
another. For example, I update a person's street address while you are
updating the entire address--whose change applies? I think this is
called "concurrent access conflict", or at least the word "concurrent"
is in there.

You may solve both problems at one time. For example, locking all
affected tables as soon as a row is read until edits are complete is a
solution, but pretty drastic. In my experience, though, the solution's
design depends on the exact specifications of the desired result.

You might say: once one user begins to make _any_ changes to a record,
then they have exclusive access to the record until changes are complete
and the record is released.

You could specify differently: if two users try to change the same
record (that they have both read at the same time), the requests will
always be applied synchronously. Once a change has been made, the second
of the two requests can be applied iff none of the updates in the second
request overwrite changes in the first.

Or: the second request is always refused; the user has to refresh their
view of the record and re-submit.

There are whole treatises and papers written on these various
approaches. "Manual" locking is just one possible solution, and the
potential downsides are just another consideration (downsides of some
kind apply to most approaches). Also, this all varies considerably
depending on the app, database and hardware, as in many cases a
"transaction" is so short-lived that lock-outs are highly unlikely
(outside of long-running batch processes or many OLTP app users).

So, IMO
1- the control of the scope of a transaction must be in the developer's
hands, or have sensible defaults, with overrides possible. that means,
at minimum, override using an explicit COMMIT or ROLLBACK (and possibly
a matching BEGIN TRAN in some databases)

2- concurrency controls can be automatically applied, at least for SQL,
but require a significant number of concessions by the developer.
PowerBuilder did this for example, the techniques are well known. my
suggestion there is that it is out of the scope of JDBC at this time.

Patrick


rbair
Offline
Joined: 2003-07-08

Patrick,

As usual, good thoughts :)

> You might say: once one user begins to make _any_
> changes to a record,
> then they have exclusive access to the record until
> changes are complete
> and the record is released.
>
> You could specify differently: if two users try to
> change the same
> record (that they have both read at the same time),
> the requests will
> always be applied synchronously. Once a change has
> been made, the second
> of the two requests can be applied iff none of the
> updates in the second
> request overwrite changes in the first.
>
> Or: the second request is always refused; the user
> has to refresh their
> view of the record and re-submit.

In my experience one of these three approaches is always chosen. It seems like #2 here is the one most frequently used since it requires the least amount of work on the part of the developer. This would be analogous to setting a JDBC connection to auto-commit.

> There are whole treatises and papers written on these
> various
> approaches. "Manual" locking is just one possible
> solution, and the
> potential downsides are just another consideration
> (downsides of some
> kind apply to most approaches). Also, this all varies
> considerably
> depending on the app, database and hardware, as in
> many cases a
> "transaction" is so short-lived that lock-outs are
> highly unlikely
> (outside of long-running batch processes or many OLTP
> app users).

Which is why I really want to leave it in the hands of the data store to do the actual transaction/concurrency work. I figure they've done a much better job than I will be able to do :)

> So, IMO
> 1- the control of the scope of a transaction must be
> in the developer's
> hands, or have sensible defaults, with overrides
> possible. that means,
> at minimum, override using an explicit COMMIT or
> ROLLBACK (and possibly
> a matching BEGIN TRAN in some databases)

I'm a little confused with this if the comment is made regarding the Transaction API in my incubator code. If you haven't browsed it (and I in no sense expect people to have done so!), the Transaction interface is basically used as a marker to indicate that a transaction is taking place. The DataSource communicates back to the Database (or whatever the data store is) to begin a transaction. This is done using the JDBC driver. Presumably the JDBC driver will use the proper syntax for beginning a transaction, rolling it back or committing it, so the developer doesn't have to worry about it. As long as they can set the concurrency/transaction isolation level, they should be ok.

> 2- concurrency controls can be automatically applied,
> at least for SQL,
> but require a significant number of concessions by
> the developer.
> PowerBuilder did this for example, the techniques are
> well known. my
> suggestion there is that it is out of the scope of
> JDBC at this time.

Can you expand on this a little bit? I've never used PowerBuilder, but I have used C++ Builder (which is like Delphi). I don't know how much they have in common.

Are you saying that PowerBuilder handled all of the concurrency stuff on its own on the client side, as opposed to letting the database (or whatnot) do it? Interesting. C++ Builder just let you set concurrency/isolation in its non-visual Database component (or whatever it was called), which would make the appropriate adjustments on the database server.

Richard

rbair
Offline
Joined: 2003-07-08

Bill,

Sorry for the wait. I'll try to outline why the DataSource "knows" about the DataModel, and also why the DataModel "knows" about the DataSource. I think you'll find that while the implementations of the two are tightly coupled, it won't be a problem (in fact, it solves some problems rather nicely).

Ok, to start with we need to agree that DataSources and DataModels go together, and due to their implementation details they are going to need to know something about each other. Or, at the very least, the DataSource is going to have to know about the DataModel.

From the javadoc:

* DataSource and DataModel implementations come in pairs. The DataSource knows
* how to get the information from the data store, but the DataModel
* implementation must know both how to ask for the data (for instance, the sql
* query to execute or the remote method to call) and it must know how to
* configure itself with the results (for example, how to handle an incoming
* RowSet or SOAP response). Hence, a specific implementation of DataSource will
* generally support a specific subset of implementations of DataModel, and
* vice versa.
*

* Two specific benefits of this architecture are:
*
* 1. The DataModel manages master/detail information. It constructs the proper
*    query (or whatnot) to retrieve the proper data based on the current record
*    in the master.
*
* 2. If the DataModel was not already attached to this DataSource and
*    becomes attached and if the DataSource is connected, then this DataSource
*    will immediately populate the given DataModel.

For instance, a RowSetDataSource will really only work with a RowSetDataModel since a JavaBeanDataModel wouldn't know what to do if given a RowSet, and a RowSetDataSource wouldn't know how to get the data to the JavaBeanDataModel without specific instructions. Likewise, a RowSetDataSource will work with either a RowSetDataModel, or perhaps even with a ResultSetDataModel since RowSet extends ResultSet.
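
One way to picture that "pairs" constraint in code (this is not the incubator API, just an illustration of the idea that a source and a model must agree on the result type they exchange):

[code]
// A source can only populate models that know how to consume its result type,
// e.g. a RowSet-producing source pairs with RowSet-consuming models.
interface ResultConsumer<R> {
    void setResult(R result);          // e.g. an incoming RowSet or SOAP response
}

interface ResultSource<R> {
    void attach(ResultConsumer<R> model);
    void refresh() throws Exception;   // query the store and push results to attached models
}
[/code]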

In any case, I think that the DataSource needs to know about the DataModel. However, why does the DataModel know about the DataSource? I did a quick search of the code to see where the "getDataSources" method is called on the DataModel interface.

Two different situations arise where at the very least it is extremely convenient for the DataModel to know about the DataSource it is bound to. First, when dealing with transactions; second, when refreshing/saving DataModel data due to master/detail events.

For the tx reasons, look at the org.jdesktop.jdnc.incubator.rbair.swing.data.DefaultTransaction code. DataModels can be constructed in a master/detail hierarchy. When any one of those DataModels is added to a Transaction, the entire tree/hierarchy must be added. Also, whenever a DataSource is added to a transaction, all DataModels "owned" by that DataSource are also added to the transaction. This is because a single DataSource maintains a single link to the underlying data store, and therefore a single transaction (consider a JDBC connection, for instance). The algorithm that does this needs to be able to discover which DataSource a DataModel is connected to.

Also, consider a master/detail arrangement. The detail DataModel is populated based on the current record in the master. So, let's suppose you changed some data associated with the detail DataModel, then selected a new record in the master. Code must save the changes to the detail DataModel to the data store before flushing the detail DataModel and reloading it.

As you can see, there are a lot of situations where the DataModel needs to know about the DataSource, and vice versa. It makes Transactions manageable, as well as master/detail hierarchies. I cannot think of anything that is really precluded by this design, except perhaps having a "disconnected" DataModel that could extract data from an SQL DataSource, for instance, detach itself, and then attach itself to a Hibernate DataSource. That would be kinda rough, but I can see a way to get around that too....

Anyway, I hope that explains my reasoning a bit; I hope I explained it clearly enough. Some of these design decisions really take a few chapters in a book to explain properly. Hopefully the end product is easy to use :)

Richard

BTW, the DataSource is really an outgrowth of the original DataLoader in JDNC. It was only for loading, and DataSource does both. Also, the DataLoader was kind of awkward to code to; I did a couple of early loaders and the messaging and such wasn't as nice as it could have been.

wsnyder6
Offline
Joined: 2004-04-20

Rich,

Thanks for taking the time to respond. I definitely agree with the DataSource-for-a-DataModel concept. My only concern is that the DataModel is becoming too 'smart.' I understood the DataModel to be a generic data-holder - it should not know anything about how it is populated. It certainly makes sense to have the DataSource aware of the model - but I am not sure a DataModel should know about its DataSource.

> Two different situations arise where at the very
> least it is extremely convenient for the DataModel to
> know about the DataSource it is bound to. First, when
> dealing with transactions;

*Maybe* there is a way around this. Transaction handling is necessary due to the Master-Detail concept. What if a more abstract API (DataModelEvent/ValueEvent) was used instead of the specific Master-Detail API? Could that simplify the transaction mechanism?

(I could be way off base here. Maybe I need to put together some code to explain what I'm thinking).

Regards,

--Bill

rbair
Offline
Joined: 2003-07-08

> *Maybe* there is a way around this. Transaction
> handling is necessary due to the Master-Detail
> concept. What if a more abstract API
> (DataModelEvent/ValueEvent) was used instead of the
> specific Master-Detail API? Could that simplify the
> transaction mechanism?

Ya, I don't know. I think having the DataModel know about its DataSource is unavoidable, because at some point in the code you need to indicate that the DataModel needs to be saved. This needs to be automated because during master/detail navigation the user may have made changes to some field that will need to be persisted, at least to some kind of "session" if not directly to the data store. Since navigation is automated, relieving the user of the burden of dealing with it, saving must be automated as well. If that is true, then the generic navigational code needs to be able to tell the DataSource to save some DataModel. But since a single DataSource can contain many DataModels, you need to be able to specify which DataModel. And since several DataModels could be related to entirely different DataSources, you cannot assume that there is some global DataSource responsible for all DataModels. The long and the short of which is that you either need some kind of Manager that knows what DataModels go with what DataSources, or the DataModel contains a simple reference back to the DataSource. Of these two options, the latter seems the simplest.
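
A tiny sketch of the second option, the back-reference (interface names here are invented, not the incubator API):

[code]
// Generic navigation code only needs the back-reference to know where to save
// pending edits before the detail model is flushed and re-populated.
interface SaveableModel {
    boolean isModified();
    ModelSource getSource();
}

interface ModelSource {
    void save(SaveableModel model) throws Exception;
}

class NavigationHelper {
    static void beforeNavigate(SaveableModel detail) throws Exception {
        if (detail.isModified()) {
            detail.getSource().save(detail);   // persist before loading the next master record's details
        }
    }
}
[/code]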

Richard

wsnyder6
Offline
Joined: 2004-04-20

> The long and the
> short of which is that you either need some kind of
> Manager that knows what DataModels go with what
> DataSources, or the DataModel contains a simple
> reference back to the DataSource. Of these two
> options, the latter seems the simplest.

From the JDNC (RAD, toolability) perspective, I agree, it does seem the simplest. However, attaching the DataSource directly to the model would result in an architecture that is too generic for us.

We are using the .5 DataModel/Form/Binding API. We have a set of (DataSource-like) Adapters that load/save the DataModels. The Adapter is the proxy to the remote service. It contains the basic calls to load and save simple DataModels, as well as more complex loading/saving calls. There are separate Actions/threads that control the DataModel transactions. Our framework looks somewhat similar to what you are proposing in your incubator project. The main difference is that there is no need for the DataModel to know about the DataSource. The Actions know about the adapters they need and handle transactions.

That being said, I would like to use the JDNC API as much as possible within our app (one reason being to give meaningful feedback to the JDNC project). I think we'll just continue to use the parts that make sense, and incorporate more and more as JDNC matures.

Rich, thanks for your time and comments. Your incubator code is well written, and it has been helpful to go through it and get a sense of where JDNC could go!!

--Bill

rbair
Offline
Joined: 2003-07-08

Hi Bill,

> > The long and the
> > short of which is that you either need some kind
> of
> > Manager that knows what DataModels go with what
> > DataSources, or the DataModel contains a simple
> > reference back to the DataSource. Of these two
> > options, the latter seems the simplest.
>
> From the JDNC (RAD,toolability) perspective, I agree,
> it does seem the simplest. However , attaching the
> datasource directly to the model would result in an
> architecture that is too generic for us.

Ok, this is good. Before we finalize any of the APIs I want to know of any situations where the DataSource/DataModel design doesn't work, because, frankly, I won't be happy until it does.

If it's at all possible I'd love to know the specifics of where the DataSource/DataModel design breaks down. From your description below, I'm not sure what functionality you're missing from my current incubator design.

Thanks again, I really do want to make this API work for you.

Richard

PS> So, even if we did go with the current design in the incubator, your app would still work, right? The only difference would be that you'd ignore the DataSource and Transaction stuff in the incubator?

wsnyder6
Offline
Joined: 2004-04-20

Hi Rich,

I apologize for the long time in replying; all my time has been taken up by our current cycle.

>
> If its at all possible I'd love to know the specifics
> of where the DataSource/DataModel design breaks down.
> From your description below, I'm not sure what
> functionality you're missing from my current
> incubator design.

I guess I am not really missing anything, I just wanted to give a quick summary of the way I use the DataModel layer.

Basically, what I am saying is that I am not sure I want the DataModel to automatically load and save itself for me. I want/need the flexibility to save/load myself. I'll see if I can explain it better:

Right now I am working on a small address-book-like app to manage contact information for insurance companies. The package structure looks something like this:

-model (JavaBeanDataModels)
-ui (JForms)
-adapter (proxy to the service layer)
-actions (invokes service proxy to load/save models)

I have a ContactServiceAdapter that loads and persists the DataModel. The adapter has one generic loadAll method. But it also has one called loadByCriteria, loadByAgency, etc. At different times these are called to populate the model.
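
Roughly, the adapter described above might look like this (method names taken from the description; the model parameter is left as Object rather than guessing at a JDNC type):

[code]
// One service proxy exposing several load methods that all feed the same model,
// plus a save; the Actions decide which one to call and on which thread.
public interface ContactServiceAdapter {
    void loadAll(Object contactsModel) throws Exception;
    void loadByCriteria(Object contactsModel, String criteria) throws Exception;
    void loadByAgency(Object contactsModel, String agencyId) throws Exception;
    void save(Object contactsModel) throws Exception;
}
[/code]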

In terms of the DataSource I would need several LoadTasks to populate a particular DataModel. As I understand it, I can only have one LoadTask per DataSource. (I suppose I could use some sort of strategy pattern in the DataSource to choose the particular LoadTask I need...)

I guess it all comes down to who has control over saving/loading data. For what it's worth, my opinion is that complete control should be given to the developer (even in master-detail contexts). I think designing a DataModel decoupled from its DataSource would be a way to do that.

> PS> So, even if we did go with the current design in
> the incubator, your app would still work, right? The
> only difference would be that you'd ignore the
> DataSource and Transaction stuff in the incubator?

Yes. The incubator design is flexible enough for us. I am committed to keep using JDNC, so we'll stick with it as much as we can.

Regards,

--Bill

rbair
Offline
Joined: 2003-07-08

Hi Bill,

>I have a ContactServiceAdapter that loads and persists the DataModel. The adapter has one generic loadAll method. But it also has one called loadByCriteria, loadByAgency, etc. At different times these are called to populate the model.

This addresses a very interesting problem alright, and yet a very common one. The current DataModel/DataSource design is pretty one-dimensional in that it only lets you specify a single query to execute. And that's kind of dumb of me :)

I'm not really happy with the coupling between the DataSource and DataModel right now. It looks wrong, but I haven't seen how to do it right yet...

>In terms of the DataSource I would need several LoadTasks to populate a particular DataModel. As I understand it, I can only have one LoadTask per DataSource. (I suppose I could use some sort of strategy pattern in the DataSource to choose the particular LoadTask I need...)

Actually, the way I have it right now those various "tasks" would be assigned to the DataModel. Well, really this totally breaks the DataModel design in the incubator :). But the idea would be to keep the queries (which it sounds like is the real meat behind these tasks) in the DataModel. The DataSource is really just managing the pipe back to the server.

Of course, you could always do the loadAll and then implement filtering either on the DataModel layer, or on the GUI layer, but that's only viable for relatively small result sets.

It looks to me like I'm missing a huge use-case here, so I'll see how this could work better.

Thanks!
Richard

wsnyder6
Offline
Joined: 2004-04-20

Rich,

Just another quick note... I know the incubator design was motivated (in part) by the need to have some sort of transactional persistence mechanism in the DataModel.

I wonder if the new jakarta-commons-transaction API would help approach this problem? If a DataModel is backed by some transactional collection, it could make it easier to decouple the DataSource from the DataModel.

--Bill

rbair
Offline
Joined: 2003-07-08

Bill --

I haven't finished looking over the commons.transaction API yet, but from what I've seen so far it looks like it is mainly oriented at JavaBeans. Do you know how well it would work with RowSets? Remember also that the real trick is that often transactional behavior is implemented on the server side (in a database, or perhaps on an EJB-based middle tier), so the server needs to know about the transactions as well. A good example is if you have a transactional scheme such that when a second individual opens the same record that a first individual is already editing, they get a read-only copy. Such schemes only work if the server is notified of the beginning of a transaction.

Since the server needs to know about the transaction, and since the DataSource is the conduit for communication with the server, the DataSource will need to be used, and it looks like we end up right where we started.

I think the commons-transactions stuff is intended mostly for folks implementing transactional behavior, not leveraging it. So it would be more useful for an EJB container than JDNC. I think. But I haven't finished reading the API yet, so we'll see.

As for your specific situation, I think you can continue to operate as you were before with your DataLoaders and simply disregard the DataSource. I'm also thinking about how to add multi-query capabilities to the current DataModel. I don't think this will be in the root interface, but rather in the RowSetDataModel. I'll let you know when I have some code to back up the idea :)

Richard

ozeigermann
Offline
Joined: 2006-02-17

Hi Bill, hi Richard, hi all!

Sorry if I miss the point, as I had a hard time working my way through this huge thread.

The transactional maps in commons transaction are much more geared towards concurrent operations than towards undoing/cancelling of entries. Because of this they make more sense when more than one thread actually tries to access such a map. They are a sort of two-level structure, with one working copy for the current transaction and one global version shared by all transactions. A rollback would mean discarding all changes in the working copy; a commit would mean copying all changes to the global one.

As I understand it, your DataModel will be accessed by one thread only, so using commons transaction might be overkill.
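
To illustrate the "two level" idea in isolation (this is only a conceptual sketch, not the commons-transaction API):

[code]
import java.util.HashMap;
import java.util.Map;

// Reads consult the per-transaction working copy first; rollback throws the
// working copy away; commit folds it into the version shared by all transactions.
class TwoLevelMap<K, V> {
    private final Map<K, V> global = new HashMap<K, V>();    // shared by all transactions
    private final Map<K, V> working = new HashMap<K, V>();   // this transaction's changes

    V get(K key) {
        return working.containsKey(key) ? working.get(key) : global.get(key);
    }
    void put(K key, V value) { working.put(key, value); }
    void rollback()          { working.clear(); }
    void commit()            { global.putAll(working); working.clear(); }
}
[/code]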

Concerning client-side and server-side transactions, I would partially disagree with Richard's last post, although IMHO it is a tough question.

If editing data inside a GUI is done inside a server (database) transaction, many databases read-lock the data until the transaction is ended. In many cases a whole table, or at least a large or frequently accessed page, is locked. Consider that all this is for entering a new customer for an insurance company, and the whole user table is blocked for other entries until the person filling in the GUI fields finishes his/her work. Maybe going to lunch first?

I do not think this will work practically. Maybe it would be a better idea to read the data in one transaction and write in another. Maybe do a sanity check before writing back the changes, e.g. whether the user has been deleted or modified by someone else.

Just my 0.5 cents.

Oliver

wsnyder6
Offline
Joined: 2004-04-20

> Hi Bill, hi Richard, hi all!
>
> Sorry if I miss the point as I had a hard time to
> work myself through this huge thread.
>
> The transactional maps in commons transaction are
> much more geared towards concurrent operations than
> undoing/cancelling of entries. Because of this they
> make more sense when more than one thread actually
> tries to access such a map. They are some sort of two
> level with one working copy for the current
> transaction and one global version shared by all
> transactions. A rollback would mean to discard all
> changes in the working copy, a commit would mean to
> copy all changes to the global one.
>
> As I understand you DataModel will be accessed by one
> thread only, thus using commons transaction might be
> overdosed.

Most of the time access will be by one thread; I would say there is a chance the client could access the same DataModel with more than one thread.
>
> Concerning client side and server side transaction I
> would partially disagree with Richard's last post,
> although IMHO it is a tough question.
>
> If editing data inside a GUI is done inside a server
> (database) transaction many databases read lock the
> data until the transaction is ended. In many cases a
> whole table or at least a large or frequently
> accessed page is locked. Consider all this is for
> entering a new customer for an insurance company and
> the whole user table is blocked for other entries
> until the guy filling in the GUI fields finishes
> his/her work. Maybe going to lunch before?
>
> I do not think this will work practically. Maybe it
> would be a better idea to read the data in one
> transaction and write in another. Maybe do a sane
> check before writing back the changes, e.g. if the
> user has been deleted or modified by someone else.

Makes sense. Perhaps the DataModel transaction API can be abstracted to allow the developer to plug in what they need (a tie-in to server-side transactions, or reading/writing in different threads, etc.).

>
> Just my 0.5 cents.
>
> Oliver

netsql
Offline
Joined: 2004-03-07

wsnyder6,

Sorry to be a pest (I will drop this thread now).

Transactions on the client side?
So when a blue screen of death happens... what gets you the transaction back?

Transactions are always server side, since 3270 days.

.V

rbair
Offline
Joined: 2003-07-08

Hey Mark,

> > I've worked out a bunch of bugs in the master/detail demo, and cleaned it up a bit. The panel to try out is the one that is shown by default which is the java bean data model panel. Here's a link to the web-started demo http://www.jgui.com/jdnc/masterdetail.jnlp
>
> I got the following exception when I clicked on the
> link:
>
> Java Web Start 1.4.2_05 Console, started Mon Oct 04
> 16:30:04 PDT 2004
> Java 2 Runtime Environment: Version 1.4.2_05 by Sun
> Microsystems Inc.
> java.lang.ClassNotFoundException:
> org.hsqldb.jdbcDriver
[snip]

Ok, I've finished the demo (and did a bit of polishing) and tested it on:

1) Slackware Linux, Gnome 2.8, X11R6.8, jdk1.5.0 & j2sdk1.4.2_05
2) Windows XP, jdk1.5.0
3) Windows XP, SP2 java jre 1.4.2_05

The demo has about a 1.5-2 meg download, and downloads the following jars:

1) jdnc-rbair.jar -- contains a branch of the jdnc code, as well as the demo code and some code from the incubator
2) commons-beanutils.jar -- used by the JavaBeanDataModel
3) commons-collections.jar -- used by commons-beanutils
4) commons-logging.jar -- used by commons-beanutils & commons-collections
5) hsqldb.jar -- Hypersonic java database engine, used for our test database
6) jnlp.jar -- probably not necessary, but I needed it when I built the project so here it is
7) rowset.jar -- contains the Java 5 rowset RI
8) looks.jar -- not currently used, but I'm tweaking the demo to use the jgoodies looks LAFs.

I apologize to anybody who tried to run the demo in the past and couldn't because of exceptions when used under java 1.4.

This demo represents the ideas expressed in this thread by Scott, Patrick, Dave, myself and many others. Thanks for taking the time to check it out ;)

Richard

PS> You'll notice that the demo also contains a viewer containing the source code that was used for constructing the gui.

Anonymous

Rich - this demo looks really promising -- I especially appreciate your
trick to provide source access from the menubar (we should do that with all demos).

Couple questions:

- The detail panel doesn't appear to support editing/storing of changes to the fields;
I'm curious why you didn't use JForm or JNForm for that?
In theory you could leverage JForm's factory layout and avoid including all the GridBag/Binding code, no?

- It looks like your JNTable is being wrapped in a JScrollPane, which should be unnecessary since
JNTable comes with a scrollpane by default (this might be why the column control button is not appearing?).

In general I'm also liking many of your API changes in the data layer to support a "true" master-detail scenario.
A demo app definitely speaks louder than words, both in terms of the running client AND the sample source.

The jdnc team really appreciates your concrete work/drive.

rbair
Offline
Joined: 2003-07-08

Hey Aim,

[snip]
> - The detail panel doesn't appear to support
> editing/storing of changes to the fields;
> I'm curious why you didn't use JForm or JNForm for that?
> In theory you could leverage JForm's factory layout
> and avoid including all the GridBag/Binding code, no?

I'm mostly focusing on the basic technologies first -- bindings, data models, etc. To be honest, I haven't played with the JForm/JNForm yet. Guess I should :)

> - It looks like your JNTable is being wrapped in a
> JScrollPane, which should be unnecessary since
> JNTable comes with a scrollpane by default (this
> might be why the column control button is not
> appearing?).

Thanks. Let me try that out.

> In general I'm also liking many of your API changes
> in the data layer to support a "true" master-detail
> scenario.
> A demo app definitely speaks louder than words, both
> in terms of the running client AND the sample
> source.
>
> The jdnc team really appreciates your concrete
> work/drive.

Thanks :)

Richard

scottr
Offline
Joined: 2004-09-05

> On to the question of master/detail functionality.
> This proposal is based on everything we've discussed
> in multiple threads on this topic, so far. First,
> each DataModel implementation would support a
> 'getDataModel(String)' method that would be used for
> getting a detail DataModel. Either MetaData or some
> other similar mechanism would be used to specify what
> kind of DataModel should be returned. Along with what
> kind of DataModel to return, it would also contain
> information on how to load the DataModel, when to
> load the DataModel, what information in the master
> DataModel to use (foreign keys, etc) to fetch the
> correct information for the detail DataModel, when to
> commit, whether to cache data, etc.
>

This sounds like a good analysis of the situation. Just one thing though, in regards to lazy loading. In the standard JDNC there is DataLoader which is used to load DataModels themselves lazily. I on the other hand thought that a good policy would be to have the detail DataModels always loaded in advance, and the actual data they encapsulate lazy-loaded on demand, and/or cached if previously loaded. This allows components to be bound to the DataModels at form creation time, but then data (if marked as lazy) is still not loaded until requested.

This requires a slightly different loader interface from DataLoader. DataLoader itself is configured with methods to asynchronously load the DataModel, loading the metadata first, etc. I'm thinking here of a Loader interface that only requires implementing asynchronous loading of the data.
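
To make that concrete, here is a minimal sketch of what such a data-only loader contract might look like. None of these names are existing JDNC API; DataModel is assumed from the incubator code:

    // Hypothetical sketch of a data-only loader; not existing JDNC API.
    public interface DetailDataLoader {
        /**
         * Start loading the data for the given detail model asynchronously,
         * using whatever value the master currently exposes for it. The
         * implementation does its I/O off the event dispatch thread and
         * reports back through the callback when done.
         */
        void loadData(DataModel detail, Object masterValue, Callback callback);

        /** Invoked (on the EDT) when the asynchronous load completes or fails. */
        interface Callback {
            void loaded(Object data);
            void failed(Throwable cause);
        }
    }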

> Here's a possible way to code it up:
>
> 1) Let DataModel contain the following method: public
> DataModel getDataModel(String key)
> 2) Place in AbstractDataModel the following data
> structure: private Map detailModels = new HashMap();
> 3) Let the map contain a String for a key, and an
> object of type MasterDetailDescriptor for the value.
> 4) MasterDetailDescriptor would be the place to store
> information such as: a policy for cacheing, a policy
> for loading data (lazily, eagerly, on demand), the
> DataLoader to use for loading, etc.
> 5) Customizing the master/detail relationship would
> simply be a matter of changing the descriptor (what
> policy to use, for instance) for a specific detail
> DataModel
>
> You may have noticed that in item #1 the method calls
> for a String named 'key'. In a JavaBeanDataModel
> world the key would simply be the property name that
> you want to base your detail DataModel on; for
> example, "orders". In the RowSetDataModel you don't
> have a column containing detail items like you do
> with a POJO, the details are contained either in
> another RowSet or still in the database if you are
> loading data "on demand". Therefore, the key is some
> value chosen by the developer that describes what is
> contained in the detail DataModel. In this case, it
> would still be "orders". The point to calling it a
> 'key' as opposed to 'fieldName' is that in the
> RowSetDataModel it isn't a field at all, so it could
> be a point of confusion.
>
> I see many of the items in the MasterDetailDescriptor
> being interfaces (LoadingPolicy, for example).
> Sensible default implementations would exist. But by
> making them interfaces we allow individuals to write
> custom implementations as well. I would probably need
> a custom implementation because I eager load a bunch
> of data from the Database. What data is loaded
> depends on the master RowSet, so each master RowSet
> would have a custom loading policy.
>

I think you are on the right track with that implementation.
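
For reference, a rough sketch of the shape that proposal implies (the method and class names come from the quoted post; everything else here is my assumption, not existing JDNC code):

    // Sketch only -- mirrors the proposal quoted above; not actual JDNC classes.
    import java.util.HashMap;
    import java.util.Map;

    public abstract class AbstractDataModel implements DataModel {
        // key -> MasterDetailDescriptor, one entry per detail DataModel
        private Map detailModels = new HashMap();

        public DataModel getDataModel(String key) {
            MasterDetailDescriptor d = (MasterDetailDescriptor) detailModels.get(key);
            return d == null ? null : d.getDetailModel();
        }

        public void setDetailDescriptor(String key, MasterDetailDescriptor d) {
            detailModels.put(key, d);
        }
    }

    // Bundles the policies that govern one master/detail relationship.
    class MasterDetailDescriptor {
        private final DataModel detailModel;
        private final LoadingPolicy loadingPolicy; // lazy, eager, on demand...
        private final DataLoader loader;           // how to actually fetch the data

        MasterDetailDescriptor(DataModel detailModel, LoadingPolicy loadingPolicy,
                               DataLoader loader) {
            this.detailModel = detailModel;
            this.loadingPolicy = loadingPolicy;
            this.loader = loader;
        }

        DataModel getDetailModel()       { return detailModel; }
        LoadingPolicy getLoadingPolicy() { return loadingPolicy; }
        DataLoader getLoader()           { return loader; }
    }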

With regards to lazy loading:

It is the detail data model that needs the custom loading policy more than the master, since it is the detail data that needs refreshing upon changes in the selected master record.

I think that any DataModel implementation, potentially being a master, should call a notification method on all listed detail DataModels upon a selected index change. This notification method includes the value that the master currently has for that detail DataModel (if any). That value could be an entire bean, a RowSet, a stub object, or a proxy. The loader evaluates that value to determine whether it needs to fetch the data (e.g. the value could be an empty proxy that needs filling in with actual values). If it doesn't need loading, it returns immediately. Otherwise, it begins the load and returns the data asynchronously upon completion.
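
A very rough sketch of that notification hook (the names here are my guesses, not existing JDNC API):

    // Hypothetical notification method on a detail DataModel.
    public interface DetailAware {
        /**
         * Called by the master when its selected record changes. masterValue is
         * whatever the master currently holds for this detail (a bean, a RowSet,
         * a stub or proxy, or null). The detail's loader inspects it and decides
         * whether an actual fetch is needed; if not it returns immediately,
         * otherwise it loads asynchronously.
         */
        void masterChanged(Object masterValue);
    }

    // A master DataModel would then do something along these lines on every
    // selected-index change:
    //
    //     for (Iterator it = detailModels.values().iterator(); it.hasNext();) {
    //         MasterDetailDescriptor d = (MasterDetailDescriptor) it.next();
    //         ((DetailAware) d.getDetailModel()).masterChanged(currentValueFor(d));
    //     }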

Scott

rbair
Offline
Joined: 2003-07-08

> This sounds like a good analysis of the situation.
> Just one thing though, in regards to lazy loading. In
> the standard JDNC there is DataLoader which is used
> to load DataModels themselves lazily. I on the other
> hand thought that a good policy would be to have the
> detail DataModels always loaded in advance, and the
> actual data they encapsulate lazy-loaded on demand,
> and/or cached if previously loaded. This allows
> components to be bound to the DataModels at form
> creation time, but then data (if marked as lazy) is
> still not loaded until requested.

I agree that DataModels should all be defined before data ever gets associated with them (especially if the GUI is defined in XML!). When the DataSource is connected to the data store, it will initiate an asynchronous load of the DataModel. If the data has already been loaded and cached, the DataSource just hands the cached data to the DataModel.
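
Roughly, I picture the DataSource side looking something like this (a sketch only; DataModel is assumed from the incubator, and handDataTo()/fetch() are placeholders, not real JDNC methods):

    // Sketch of the connect-then-load behaviour described above.
    import java.util.HashMap;
    import java.util.Map;
    import javax.swing.SwingUtilities;

    public class CachingDataSource {
        private final Map cache = new HashMap(); // key -> previously loaded data
                                                 // (a real one would synchronize)

        public void connect(final DataModel model, final Object key) {
            Object cached = cache.get(key);
            if (cached != null) {
                handDataTo(model, cached);       // cache hit: no I/O at all
                return;
            }
            new Thread(new Runnable() {          // cache miss: load off the EDT
                public void run() {
                    final Object data = fetch(key);
                    cache.put(key, data);
                    SwingUtilities.invokeLater(new Runnable() {
                        public void run() { handDataTo(model, data); }
                    });
                }
            }).start();
        }

        protected Object fetch(Object key) { return null; }        // real I/O goes here
        protected void handDataTo(DataModel model, Object data) {} // push into the model
    }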

> This requires a slightly different loader interface
> to DataLoader. DataLoader itself is configured with
> methods to asynchronously load the DataModel itself,
> loading the metadata first, etc. I'm thinking here of
> a Loader interface that only requires implementation
> of asynchronous loading of the data itself.

The async loading of meta-data can come in handy in some scenarios, so I'm not sure we want to ditch it. In particular, consider a JTable wired to show the results of an arbitrary query the user executes against the database. The DataModel will need to extract the meta-data from the ResultSet every time the query is executed, so lazy loading of meta-data makes sense in this case.

However, if you have manually set the MetaData, then you probably don't want any automatic meta-data instantiation to occur. DataModel should probably have a "don't touch my meta-data" flag in that case.
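
In code that flag might look something like this (purely hypothetical; neither the flag nor deriveMetaData() exist today):

    // Hypothetical sketch of a "don't touch my meta-data" switch.
    public class QueryDataModel /* extends AbstractDataModel */ {
        private boolean autoMetaData = true; // derive MetaData from each ResultSet by default
        private MetaData metaData;

        public void setMetaData(MetaData md) {
            this.metaData = md;
            this.autoMetaData = false;           // manually set => don't touch it
        }

        void resultsLoaded(java.sql.ResultSet rs) throws java.sql.SQLException {
            if (autoMetaData) {
                metaData = deriveMetaData(rs.getMetaData()); // skipped when set manually
            }
            // ...populate rows from rs...
        }

        private MetaData deriveMetaData(java.sql.ResultSetMetaData rsmd) {
            // hypothetical helper: build column names/types from the ResultSetMetaData
            return null;
        }
    }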

Scott, describe this part a little more for me if it still applies to the new architecture idea:

> This notification method
> includes the value that the master currently has for
> that detail DataModel (if any). That value could be
> an entire bean, or RowSet, a stub object, or a proxy.

I'm not sure what you mean by 'value that the master currently has for that detail DataModel'.

The way I was thinking of handling the situation was to have the detail DataModel ask the master for values in the current record which it would then use as the basis for the detail DataModel. Here are a few examples:

Ex 1: If the master DataModel wrapped a java bean and the detail also wrapped java beans, then when the detail is notified that it needs to reload itself, it would ask the master for the current bean and then call the appropriate "get" method on that bean for the detail items (getOrders, for example).

Ex 2: If the master DataModel wrapped a java bean and the detail wrapped an sql result set, then when the detail is notified that it needs to reload itself, it would ask the master for a value on the current record (for example, "customerId") and then make the appropriate sql call against its DataSource to get a ResultSet containing the orders for that customer

Ex 3: If the master DataModel wrapped a RowSet and the detail also wrapped a RowSet, then when the detail is notified that it needs to reload itself it would ask the master for a value on the current record (for example, "customer_id") and then make the appropriate sql call using its DataSource, thereby retrieving a RowSet containing orders for that customer.

In each of these scenarios the master only has to tell the detail that it needs to be reloaded, and the detail takes over from there.
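
To make Ex 1 concrete, here's a rough sketch of a bean-backed detail reloading itself from the master (getValue() and setData() are placeholders, not real JDNC methods):

    // Hypothetical sketch of Ex 1: the detail asks the master for its current
    // bean and calls the configured getter ("orders" -> getOrders()).
    public class JavaBeanDetailModel /* extends AbstractDataModel */ {
        private final DataModel master;
        private final String property; // e.g. "orders"

        public JavaBeanDetailModel(DataModel master, String property) {
            this.master = master;
            this.property = property;
        }

        /** Called when the master's selected record changes. */
        public void reload() throws Exception {
            Object masterBean = master.getValue(); // placeholder accessor for the current record
            String getter = "get" + Character.toUpperCase(property.charAt(0))
                    + property.substring(1);
            Object detailData = masterBean.getClass()
                    .getMethod(getter, new Class[0])
                    .invoke(masterBean, new Object[0]);
            setData(detailData);                   // placeholder: repopulate this model's rows
        }

        private void setData(Object data) { /* push into this model */ }
    }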

Richard

scottr
Offline
Joined: 2004-09-05

>
> However, if you have manually set the MetaData, then
> you probably don't want any automatic meta-data
> instantiation to occur. DataModel should probably
> have a "don't touch my meta-data" flag in that case.
>
That's true; there will be situations like that where meta-data is loaded dynamically.

> Scott, describe this part a little more for me if it
> still applies to the new architecture idea:
>
> > This notification method
> > includes the value that the master currently has
> for
> > that detail DataModel (if any). That value could
> be
> > an entire bean, or RowSet, a stub object, or a
> proxy.
>
> I'm not sure what you mean by 'value that the master
> currently has for that detail DataModel'.
>
> The way I was thinking of handling the situation was
> to have the detail DataModel ask the master for
> values in the current record which it would then use
> as the basis for the detail DataModel. Here are a few
> examples:
>
> Ex 1: If the master DataModel wrapped a java bean and
> the detail also wrapped java beans, then when the
> detail is notified that it needs to reload itself, it
> would ask the master for the current bean and then
> call the appropriate "get" method on that bean for
> the detail items (getOrders, for example).
>
(And the value returned could be an empty proxy or stub, e.g. a Hibernate proxy, which would also need loading to be filled in.)

> Ex 2: If the master DataModel wrapped a java bean and
> the detail wrapped an sql result set, then when the
> detail is notified that it needs to reload itself, it
> would ask the master for a value on the current
> record (for example, "customerId") and then make the
> appropriate sql call against its DataSource to get a
> ResultSet containing the orders for that customer
>
> Ex 3: If the master DataModel wrapped a RowSet and
> the detail also wrapped a RowSet, then when the
> detail is notified that it needs to reload itself it
> would ask the master for a value on the current
> record (for example, "customer_id") and then make the
> appropriate sql call using its DataSource, thereby
> retrieving a RowSet containing orders for that
> customer.
>
> In each of these scenarios the master only has to
> tell the detail that it needs to be reloaded, and the
> detail takes over from there.
>

Yes, you are right with these scenarios, and this was what I was getting at too. The master will not know whether its detail model is set up to lazy-load its data, or whether it already contains that data itself (e.g. as a java bean property).

The only difference between the approach you describe and the one I describe is in who takes responsibility for the querying - does the detail query the master for the data it needs, or does the master tell the detail what data is available for it. I actually like the first approach (yours) better, as it is cleaner, but what concerns me is that in Java object graphs the detail elements (collections of beans, etc.) don't normally have knowledge of their owning parent. Contrast this with an RDBMS, where the situation is reversed - detail tables have knowledge of their master tables via foreign key specification, but master tables don't automatically have knowledge of their detail tables.

But I think the distinction here is academic. In reality, the wiring up of a master-detail tree would have to configure the parent-child knowledge in both directions. And my gut instinct tells me to go with the event model you have described above anyway.

Scott

rbair
Offline
Joined: 2003-07-08

Hey y'all,

If you have a chance, check out the master/detail demo in the incubator project under src/demo/org/jdesktop/jdnc/incubator/rbair/masterdetail. It's very, very raw in that the only functionality that I'm testing right now is basic master/detail stuff (refreshing the detail information, etc). The master DataModel contains "Item" objects, and the detail DataModel contains a "User" object. You'll note that I could accomplish the same thing with only a single data model if my "detail" text fields were bound to "seller.firstName", for instance, instead of "firstName" on a detail DataModel, but where's the fun in that :).

When you select an item in the List on the left of the window, its detail information and the detail information for its seller are listed on the right. Something's not right with the JList's selection/master setRecordIndex code, so sometimes it doesn't refresh until *after* you click off the item. Working on that...

Richard

rbair
Offline
Joined: 2003-07-08

I've worked out a bunch of bugs in the master/detail demo and cleaned it up a bit. The panel to try out is the one shown by default, which is the java bean data model panel. Here's a link to the web-started demo: http://www.jgui.com/jdnc/masterdetail.jnlp

This little app demonstrates a master->detail->detail chain, binding between a JList and a data model, a JTable and a data model, and several JTextFields and a data model. This app is not meant to demonstrate saving changes or edits, just synchronization between the various data models and the components.

Rich

Message was edited by: rbair -- added http to the link

Mark Davidson
Offline
Joined: 2006-02-17

Hi Rich,

> I've worked out a bunch of bugs in the master/detail
> demo, and cleaned it up a bit. The panel to try out
> is the one that is shown by default which is the java
> bean data model panel. Here's a link to the
> web-started demo
> http://www.jgui.com/jdnc/masterdetail.jnlp

I got the following exception when I clicked on the link:

Java Web Start 1.4.2_05 Console, started Mon Oct 04 16:30:04 PDT 2004
Java 2 Runtime Environment: Version 1.4.2_05 by Sun Microsystems Inc.
java.lang.ClassNotFoundException: org.hsqldb.jdbcDriver
at com.sun.jnlp.JNLPClassLoader$1.run(JNLPClassLoader.java:259)
at java.security.AccessController.doPrivileged(Native Method)
at com.sun.jnlp.JNLPClassLoader.findClass(JNLPClassLoader.java:247)
at java.lang.ClassLoader.loadClass(ClassLoader.java:289)
at java.lang.ClassLoader.loadClass(ClassLoader.java:235)
at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:302)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:141)
at org.jdesktop.jdnc.incubator.rbair.masterdetail.MasterDetailWindow.(Unknown Source)
at org.jdesktop.jdnc.incubator.rbair.masterdetail.Main.main(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:324)
at com.sun.javaws.Launcher.executeApplication(Launcher.java:837)
at com.sun.javaws.Launcher.executeMainClass(Launcher.java:797)
at com.sun.javaws.Launcher.continueLaunch(Launcher.java:675)
at com.sun.javaws.Launcher.handleApplicationDesc(Launcher.java:390)
at com.sun.javaws.Launcher.handleLaunchFile(Launcher.java:199)
at com.sun.javaws.Launcher.run(Launcher.java:167)
at java.lang.Thread.run(Thread.java:534)

It looks like the JDBC driver is missing.

--Mark

rbair
Offline
Joined: 2003-07-08

Hey Mark,

Sorry about that. The jnlp file has been fixed.

Richard

rbair
Offline
Joined: 2003-07-08

BTW, there will be exceptions thrown during the execution of the demo, but none of them should affect the demo. They are related to the RowSet portion that isn't finished yet (some nasty threading race condition has got me beat at the moment). But you should be able to view the java beans portion of the demo perfectly.

Ann Sunhachawee

Hi all-
I've been trying to launch Rich's master/detail demo, but I'm having
problems. I get the Java Web Start Dialog, and it seems to freeze at
"Loading jdnc-rbair.jar from www.jgui.com \n Read 8K of 1.6M (0%) \n
Waiting for data..."

Any common causes for this? I have been able to launch the demos on the
jdnc.java.net site successfully.
I'm running WinXP SP2, and I have a few versions of the JDK (1.4.2 and 1.5), so I'm not
sure which one it is picking up (or how to figure that out).

Thanks for your help.
Ann


rbair
Offline
Joined: 2003-07-08

Hello Ann,

Hmmm... I'm not sure why its hanging on you. Unfortunately, the demo isn't mirrored anywhere either. If you'd like, you could send me an email at rich_jdnc@mail.autocraft-services.com and I'll reply with the jdnc-rbair.jar and a bat file for executing the jar. There are some other jars you'll have to download (too big for an email), but I'll be happy to provide links.

Richard

wsnyder6
Offline
Joined: 2004-04-20

Hello Rich,

I've spent some time reading the various discussions on DataModels and looked through your incubator code. (Lots of good stuff in there!) We are using the JDNC Binding framework (and a few SwingX components). It seems that a lot of us are trying to implement similar ideas regarding DMs. Go figure :)

A question about your proposed DataSource/Transaction API - why did you choose to make the DM aware of the DataSource, rather than use a decoupled DataLoader/Adapter?

Regards,

--Bill

rbair
Offline
Joined: 2003-07-08

Hi Bill,

I'm really sorry that I haven't been able to get to this sooner. I just signed on with Sun, and all of the work involved in the adjustment (moving, interviews, etc.) has pretty much cut me off from the internet. I'll post a proper response to your question within the next couple of days.

Sorry again,
Rich

rbair
Offline
Joined: 2003-07-08

Patrick,

I saw an email with a reply from you for this thread (it had a lot of good stuff regarding testing), but it hasn't shown up in the forums yet. Just a heads up: you may have to repost. I don't have a copy of the email any longer either (I assumed it would be here on the forums!).

Aim, any eta on the sandbox? Is it going to be a branch, or another project entirely?

Richard

Message was edited by: rbair

Ha, never mind Patrick, it finally showed up!

scottr
Offline
Joined: 2004-09-05

Hi Guys,

I've just shifted over from another thread discussing master-detail issues (http://www.javadesktop.org/forums/thread.jspa?threadID=4127&tstart=0), and it took me a while to catch up on the discussions in this thread.

I agree that having a community sandbox where we could post code for testing is a good idea. One thing that occurs to me as useful would be to agree on and set up a test database structure, with test data, that can serve as a common reference point for testing different designs against. SQL scripts and Ant build scripts for creating and populating such a test database could be placed in the sandbox.

HSQL seems to be commonly used for this purpose, and its nature (Java in-process engine, lightweight and open-source) lends itself well to test harnesses like this.
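
For anyone who hasn't tried it, an in-process HSQLDB instance for a test harness is roughly this cheap to stand up (the table names here are just examples; the real schema and data scripts would live in the sandbox):

    // Minimal in-memory HSQLDB setup for a shared test database (sketch).
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class TestDb {
        public static Connection open() throws Exception {
            Class.forName("org.hsqldb.jdbcDriver");          // same driver the demo uses
            Connection c = DriverManager.getConnection(
                    "jdbc:hsqldb:mem:jdnctest", "sa", "");   // throwaway in-memory database
            Statement s = c.createStatement();
            s.execute("CREATE TABLE customer (id INT PRIMARY KEY, name VARCHAR(50))");
            s.execute("CREATE TABLE orders (id INT PRIMARY KEY, customer_id INT, total DECIMAL)");
            s.close();
            return c;
        }
    }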

Anyway, I'm sure we could come up with a moderately complex sample database similar to the 'Northwoods University' or 'Clearwoods Trading Company' used by Oracle for training.

Similarly, if anyone is testing designs against O/R tools like Hibernate or JDO, then any mapping or configuration resources could also be placed there for common use. I'm fairly confident that for a standard test db structure these resources would also be fairly stable.

Scott