approaching master detail tables

rbair
Offline
Joined: 2003-07-08
Points: 0

> Hi Guys,
>
> I've just shifted over from another thread discussing master detail issues
> (http://www.javadesktop.org/forums/thread.jspa?threadID=4127&tstart=0),
> and just took a while to catch up on the discussions on this thread.
>
> I agree that having a community sandbox where we
> could post code for testing is a good idea. One thing
> that occurs to me that would be useful is if we agree
> upon and set up a test database structure with test
> data that can be used as a reference point to test
> different designs against. Sql scripts and ant build
> scripts could be placed into the sandbox for the
> creation and population of such a test database.
>
> HSQL seems to be commonly used for this purpose, and
> its nature (Java in-process engine, lightweight and
> open-source) lends itself well to test harnesses like
> this.
>
> Anyway, I'm sure we could come up with a moderately
> complex sample database similar to the 'Northwoods
> University' or 'Clearwoods Trading Company' used by
> Oracle for training.
>
> Similarly, if anyone is testing designs against O/R
> tools like Hibernate or JDO, then any mapping or
> configuration resources could also be placed there
> for common use. I'm fairly confident that for a
> standard test db structure these resources would also
> be fairly stable.
>
> Scott

Sounds good to me. I wonder if there is a database out there in the open source world we could borrow...

After some looking around, MySQL has a sample database (http://dev.mysql.com/get/Downloads/Manual/world.sql.gz/from/pick) that looks like it's free to use. We may have to email MySQL and ask.

Richard

Amy Fowler

Rich, Dave, Patrick, [shahbaz],

Once we open up the incubator project, it would be great to see
one or more of you prototype some of these interesting API ideas
for dealing with these master/detail concepts. The community could
then experiment directly with them, which is the sort of feedback-loop
we need to get it right. Once we gain consensus on an approach, we
can then pull the solution into jdnc-proper.

Aim

Patrick Wright

Hi Richard

I'm a couple of days behind in this discussion, so I'll try not to cover old ground.

My feeling is that if we don't address the many issues separately (as I
think the last couple of days of discussion are showing), we will sink. I
think the goals are clear as you put them...

First are two major areas of concern: JavaBeans and RDBMS. My sense is
that we should approach these two separately, then look to find the parts
that can be generalized and build common interfaces for our goals, as far
as possible. I think that for transactions, loading behavior, persistence
behavior, and key linking, there is a lot of work already done in other
tools; for things like transactions, this is part of what defines an RDBMS.
For JavaBeans, on the other hand, we have no standard on how they are
loaded, or what persistence means, much less transactions, initialization,
etc. People approach beans in many different ways.

My guess is we might have two or three types of bean use cases to start
with. One is where the beans are persisted using an OR mapping layer like
Hibernate; the second is where the beans all reside in memory and can be
traversed perfectly and with no latency using the object graph; and third,
where beans are returned as value objects from a J2EE server (or SOAP
service). For each of these, we may have to approach and think about the
problem differently.

So, treat beans and SQL as different data source topics and drive
individual designs around them.

Second, I suggest we take the features of this new API and approach them
in separate discussions or one at a time. That is: linking, transactions,
persistence, initialization, validation, navigation, etc. On transactions
we probably want to be very careful not to design anything new, as there
are already standard Java APIs designed for this.

Third, let's recognize we are trying to (and need to) design for a range
of control, let's call it 'autonomy'. At full autonomy, out of the box
with no intervention, you can link two models, navigate rows in sync, have
them both wrapped in the same transaction, etc. This is what tools like MS
Access provide you. The downside is you are pretty limited to a specific
feature set (linking two tables in the same database with a declared,
shared key across a single connection, etc.)

The range of autonomy in our design might look like:
- Full autonomy
- Full autonomy, with overrideable settings by developer
- Mixed autonomy, where developer can intercept and modify events
- No autonomy, developer binds it all together

Something like that. It would be great to have out-of-the-box working
datamodel binding, and also be able to intercept events and adjust the
behavior for more complex situations.

Moving forward: I suggest we split the discussion and start a new thread,
only addressing linking between two datamodels, how key mapping is
specified, etc. Then pick one related topic (synchronized navigation,
loading) and dig into that.

If people think this isn't the right way to approach the discussion, I am
glad to back off and follow how it goes otherwise.

I generally like where you are heading with pts 1-4.

-Patrick

> All,
>
> We've done a good job laying out the design goals, but we've failed to
> come to a design decision. I'm going to summarize what we've discussed,
> and then make a couple of proposals that we can debate. Hopefully at the
> end, we'll all be happy, post an RFE and finally have world peace :)
>
> 1) Flexible key linking relationship (Patrick)
> 2) Flexible loading behavior (Patrick)
> 3) Flexible transaction scope (Patrick)
> 4) Flexible persistence behavior (Patrick)
> 5) Flexible initialization behaviour (Dave)
> 6) Data model agnostic (Richard)
>
> Ok, here's the first proposal. It's based on the current JDNC design, with a
> modified Binding which allows binding to Objects, not just Components.
>
> --------------------------------------------------
>
> 1) A DataModel is an island. It doesn't know about anything outside of
> itself, with the exception of listeners. It knows who has registered as a
> listener and will notify listeners when events happen (such as a change in
> the current record, or a new record is added, or a record removed, etc).
>
> 2) A Binding is used to bind a master DataModel to a detail DataModel. The
> Binding knows how to populate the detail DataModel by using the proper
> loader and performing the key linking/loading.
>
> 3) A Binding is used between each DataModel and the gui components bound
> to it. These bindings register themselves with the components, if
> necessary, to detect selection events. For instance, a ListBinding might
> fill a JList with a row for every record in the DataModel. It might also
> listen for selection events and set the current record index in the
> DataModel to the corresponding selected element in the List.
>
> 4) MasterDetail bindings leave two methods unimplemented -- save & load.
> These methods are called by the binding automatically at the proper times.
> These tasks can be deferred to a Loader/Saver that performs the operation
> in a separate thread, or the implementation can be handled in a custom
> manner.
>
> 5) The DataModel has a method for saving, loading/refreshing, undo, redo,
> txStart, txEnd, deleteRecord, addRecord, etc.
>
> -----------------------------------------------------
>
> The second proposal is similar, but removes the MasterDetail binding.
> Instead, each DataModel has a 'setMaster' method. The save and load
> methods in DataModel will either contain the code itself (requiring an
> AbstractDataModel that would be extended for almost everything), or
> alternatively the DataModel would have a PersistenceHandler interface that
> would be provided to save/load information.
>
> ---------------------------------------------------
>
> A third proposal is a combination of the first two: the underlying
> architecture would follow proposal #1, and a higher level abstraction
> along the lines of #2 would exist for such standard schemes as SQL based
> queries.
>
> ---------------------------------------------------
>
> Thoughts?
>
> Richard

rbair
Offline
Joined: 2003-07-08
Points: 0

> First are two major areas of concern: JavaBeans and RDBMS. My sense is that we should approach these two separately, then look to find the parts that can be generalized and build common interfaces for our goals, as far as possible.
[snip]

You make a good point here, although I would approach it from the opposite direction. I'd prefer to start defining things that we know work for both approaches (the simple stuff, like navigation, master/detail linking, etc.) and see where it gets us. If we find the two approaches to be irreconcilable, then we can split the API at that point.

> My guess is we might have two or three types of bean use cases to start with. One is where the beans are persisted using an OR mapping layer like Hibernate; the second is where the beans all reside in memory and can be traversed perfectly and with no latency using the object graph; and third, where beans are returned as value objects from a J2EE server (or SOAP service). For each of these, we may have to approach and think about the problem differently.

You left out simple SQL :). We might have a hard time writing test cases for most of these. We should definitely keep all the approaches in mind as we go along. Does anybody know how we could write good test cases for these various methodologies?

> Second, I suggest we take the features of this new API and approach them in separate discussions or one at a time. That is: linking, transactions, persistence, intialization, validation, navigation, etc. On transactions we probably want to be very careful in not designing anything new as there are already standard Java APIs designed for this.

Sounds sage to me. I've drafted a proposed schedule for feature discussions:

Round 1: Navigation, Master/Detail, any nagging concerns regarding the current DataModel specification.

Round 2: Loading, Persistence

Round 3: Add/Remove records, undo/redo

Round 4: Transactions

[snip]
> Moving forward: I suggest we split the discussion and start a new thread, only addressing linking between two datamodels, how key mapping is specified, etc. Then pick one related topic (synchronized navigation, loading) and dig into that.

I'm happy to keep the discussion in this thread, but starting a new one is fine too if that's preferable.

Richard

Patrick Wright

Richard

Thanks for your response.

> [snip]
>
> You make a good point here, although I would approach it from the opposite
> direction. I'd prefer to start defining things that we know work for both
> approaches (the simple stuff, like navigation, master/detail linking, etc)
> and see where it gets us. If we find the two approaches to be
> irreconcilable, then we can split the API at that point.

OK by me. My bet is that we will need to split sooner rather than later,
but let's try and make it work.

>
>> My guess is we might have two or three types of bean use cases to start
>> with.
[snip]
>
> You left out simple sql :).

That was the other category--I was talking about JavaBeans in these
examples. I can imagine for SQL we might have a handful of options as
well, perhaps: list, single row, paginated? I think with SQL the work gets
more interesting around issues of persistence, although there are neat
auto-navigation things we can work on later, using defined keys in the
database.

> We might have a hard time writing test cases
> for most of these. We should definitely keep all the approaches in mind as
> we go along. Does anybody know how we could write good test cases for
> these various methodologies?

Some ideas:
- Decide on a database.

Anything related to setup for SQL will be a time drain if we have to write
our own DB, so I suggest we use a database from an open-source project,
backed by an engine that is easy to set up and configure, like Hypersonic
or Cloudscape...I find MySQL a bit of a hassle to work with because I don't
use it often, but that is OK with me.

For the database, I might suggest Ashkelon, a project on sourceforge
(http://ashkelon.sourceforge.net/) that pulls entire Java APIs into the
database using a doclet. It would be easy to populate, has an interesting
structure, and we don't have to define any new data structures or data.

- For memory-present beans, use XML

For non-database-backed beans, I suggest we use XML, either mapped to beans
using JAXP, or just via one of the DOMs that supports the JavaBeans spec.
Then we can have one XML file as our datasource for testing, and can remove
latency issues while working on navigation and linking. Again, we don't need
to argue about containment/extension, class design, etc.
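
As an illustration, the in-memory fixture could be as simple as parsing
one shared XML file with the standard JAXP DOM parser at test setup. A
minimal sketch, assuming nothing beyond the JDK; the file name and element
names are placeholders:

import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class XmlFixture {

    // Parses the shared test-data file once; no database or network involved.
    public static Document load() throws Exception {
        return DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(XmlFixture.class.getResourceAsStream("/testdata.xml"));
    }

    // Example of walking the structure, e.g. counting detail elements.
    public static int countElements(Document doc, String tagName) {
        return doc.getElementsByTagName(tagName).getLength();
    }
}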

- Punt on web/remote access for now

For now, I suggest we punt on developing and testing for value objects
returned from an EJB or SOAP or other remote execution--unless someone can
script this easily, I think it will be difficult to make an easy-to-use
test environment, and we should focus on the model API we are working with.
Hopefully we can 'plug in' these data sources once we have gotten the API
working.

- Testing

: if we are using a specific datasource (that we all agree on), then setup
would mean establishing connections (for SQL) or loading the XML (for
memory-resident).

: for linking, tests would need to be divided into 1:1 mappings and 1:M
mappings to begin with, I think.

: for 1:M we can test that the detail has the expected number of rows
after navigation...?

: for synchronized navigation, should we have routines that can test for
key values on a row-by-row basis? E.g. that the rows loaded have the key
that I expect them to (after navigation)? (See the rough JUnit sketch below.)
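
To make the last two items concrete, here is a rough JUnit sketch of what
such tests might look like. The DataModel methods used here
(setRecordIndex, getRecordCount, getDataModel, getValue) and the
TestFixture factory are only placeholders for whatever API we end up with,
and the fixture values are invented:

import junit.framework.TestCase;

public class MasterDetailNavigationTest extends TestCase {

    private DataModel users;   // the master model

    protected void setUp() throws Exception {
        // Hypothetical factory that loads the agreed-upon fixture
        // (HSQLDB schema or XML file) into a master DataModel.
        users = TestFixture.createUserModel();
    }

    public void testDetailRowCountAfterNavigation() {
        users.setRecordIndex(0);
        DataModel items = users.getDataModel("items");
        // The fixture defines how many items user 0 owns; 3 is just an example.
        assertEquals(3, items.getRecordCount());
    }

    public void testDetailRowsCarryMasterKey() {
        users.setRecordIndex(1);
        Object masterKey = users.getValue("id");
        DataModel items = users.getDataModel("items");
        // Every detail row should reference the key of the selected master row.
        for (int i = 0; i < items.getRecordCount(); i++) {
            items.setRecordIndex(i);
            assertEquals(masterKey, items.getValue("seller_id"));
        }
    }
}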

> Sounds sage to me. I've drafted a proposed schedule for feature
> discussions:
>
> Round 1: Navigation, Master/Detail, any nagging concerns regarding the
> current DataModel specification.
>
> Round 2: Loading, Persistence
>
> Round 3: Add/Remove records, undo/redo
>
> Round 4: Transactions

Cool.

>
> [snip]
>> Moving forward: I suggest we split the discussion and start a new
>> thread, only addressing linking between two datamodels, how key mapping
>> is specified, etc. Then pick one related topic (synchronized navigation,
>> loading) and dig into that.
>
> I'm happy to keep the discussion in this thread, but starting a new one is
> fine too if that's preferable.

As long as we can keep the discussion focused on one topic at a time, it
is fine with me. I don't want to cut off discussion, just would like to
zero in on one aspect of the problem at a time.

Patrick

rbair
Offline
Joined: 2003-07-08
Points: 0

>Some ideas:
>- Decide on a database.
>
>Anything related to setup for SQL will be a time drain if we have to write our own DB, so I suggest we use a database from an open-source project, with a database that is easy to setup and configure, like Hypersonic, or Cloudscape...I find MySQL a bit of a hassle to work with because I don't use it often but that is OK with me.

I guess Cloudscape has become a new project at Apache called Derby. The site looks very, very raw. Hypersonic looks great. There shouldn't be a problem, but it has its own license and Sun lawyers may need to go through it before any Hypersonic jars could find their way into the jdnc source. However, the tests will be for JDBC, so no big deal. My vote, +1 for Hypersonic. We should use the in-memory database server for tests, and construct the database from scratch at the beginning of the tests.
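
For example, a test fixture could spin up the in-memory Hypersonic engine and build the schema from scratch before each run. A minimal sketch, assuming only the stock HSQLDB driver and plain JDBC; the table names and rows are made up for illustration:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class TestDatabase {

    // Opens a throwaway in-memory database; it disappears when the JVM exits.
    public static Connection open() throws Exception {
        Class.forName("org.hsqldb.jdbcDriver");
        return DriverManager.getConnection("jdbc:hsqldb:mem:jdnctest", "sa", "");
    }

    // Builds the schema and a couple of test rows at the start of a test run.
    public static void createSchema(Connection con) throws Exception {
        Statement st = con.createStatement();
        st.executeUpdate("CREATE TABLE auction_user (id INTEGER PRIMARY KEY, name VARCHAR(50))");
        st.executeUpdate("CREATE TABLE item (id INTEGER PRIMARY KEY, seller_id INTEGER, "
                + "description VARCHAR(100), FOREIGN KEY (seller_id) REFERENCES auction_user (id))");
        st.executeUpdate("INSERT INTO auction_user VALUES (1, 'gianni')");
        st.executeUpdate("INSERT INTO item VALUES (1, 1, 'antique clock')");
        st.close();
    }
}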

>For the database, I might suggest Ashkelon, a project on sourceforge (http://ashkelon.sourceforge.net/) that pulls entire Java APIs into the database using a doclet. It would be easy to populate, has an interesting structure, and we don't have to define any new data structures or data.

Cool, as long as the memory-present beans use the same data. I would like to see the beans and the database model the same data so we can compare apples to apples. Besides, for presentations and demos nothing is cooler than showing developers the same app with two entirely different back ends.

>- For memory-present beans, use XML

>For non-database backed beans, suggest we use XML, either mapped to beans using JAXP, or just use one of the DOMs that supports the JB spec. Then we can have one XML file as our datasource for testing, and can remove latency issues while working on navigation and linking. Again, don't need to argue about containment/extension, class design, etc.

+1

> - Punt on web/remote access for now

+1 here too. I'll be playing with the design against a remote webservice of my own, so I'll be able to provide *some* feedback in this arena and make sure our design doesn't hose my project. I can't speak for web apps in general though.

>- Testing
>
>: if we are using a specific datasource (that we all agree on), then setup would mean establishing connections (for SQL) or loading the XML (for memory-resident).
>
>: for linking, tests would need to be divided into 1:1 mappings and 1:M mappings to begin with, I think.
>
>: for 1:M we can test that the detail has the expected number of rows after navigation...?
>
>: for synchronized navigation, should we have routines that can test for key values on a row by row basis? E.g. that the rows loaded have the key that I expect them to (after navigation)

These sound like a good start. We should probably define the test cases as we tackle each issue (navigation, master/detail, loading, etc).

Richard

Patrick Wright

Hi Richard

[snip]
>>For the database, I might suggest Ashkelon, a project on sourceforge
>> (http://ashkelon.sourceforge.net/) that pulls entire Java APIs into the
>> database using a doclet. It would be easy to populate, has an interesting
>> structure, and we don't have to define any new data structures or data.
>
> Cool, as long as the memory-present beans use the same data. I would like
> to see the beans and the database model the same data so we can compare
> apples to apples. Besides, for presentations and demos nothing is cooler
> than showing developers the same app with two entirely different back
> ends.

Sounds like a plan. If we use Ashkelon, we'd then have the following steps:

: pick an Ashkelon database release to standardize on (as there will
probably be minor structural changes over time)

: choose our RDBMS

: clean up the db build script for our target RDBMS (and once it works,
check it in to CVS!)

: prepare data load script (possibly by loading with Ashkelon tools and
then dumping)

: prepare XML extract using same data

That is a start--then there are connection settings, ant scripts,
mapping...I have the code for Ashkelon at home and will look at it later
today to see what sort of mapping he is doing for queries.

Ashkelon is interesting but I am open to another one as well. What I like
is that it is supported, it is easy to add data to (as it uses a doclet on
any API we choose), and the relationships sound interesting (classes,
packages, methods, all interlinked).

-pw

rbair
Offline
Joined: 2003-07-08
Points: 0

Hey Patrick,

I don't really have a preference for what data to model; Ashkelon is fine, or a contrived model of our own. If the test data doubled as demo data, then I'd probably lean towards a 'Northwinds'-style database, since the demo would be aimed at VB-type developers at least as much as at geeky javadoc/source code type developers (like myself :)). I wonder if Scott or Dave has a preference?

>: choose our RDBMS

I'm voting for Hypersonic.

rbair
Offline
Joined: 2003-07-08
Points: 0

Gilles posted a message in another thread mentioning a Hibernate script for a database modeling an auction that we could use (it's LGPL). I thought it might make a nice starting point. The script (modified) can be found on my 'play' website at www.jgui.com. I haven't run it against Hypersonic yet, but wanted to post it so I can get input as to whether this is a good direction or not.

Patrick, if you want to use Ashkelon instead and have invested some time in it, I'd be perfectly happy to go that route as well. I just wanted to move forward as fast as I can.

Also, word is that the incubator project is up and running. I assume we'll be coding in there. The 'etiquette' section suggests each committer have their own package named after their uid for playing under. I assume it would be alright to have a joint one as well named after the feature?

Anybody who has signed & faxed in the JCA can commit to the project, so we should be able to code pretty quickly now. The incubator doesn't have its own forums yet, but it may be a good idea. I'll ask Brian Beck what kind of work that entails. Until then I think we should keep the discussion here.

Richard

Patrick Wright

> Gilles posted a message in another thread mentioning a hibernate script
> for a database modeling an auction that we could use (it's LGPL). I
> thought it might work for a nice starting point. The script (modified) can
> be found on my 'play' website at www.jgui.com. I haven't run it against
> hypersonic yet, but wanted to post it so I can get input as to whether
> this is a good direction or not.
>
> Patrick, if you want to use Ashkelon instead and have invested some time
> in it, I'd be perfectly happy to go that route as well. I just wanted to
> move forward as fast as I can.

I don't personally have any time in the next few days to set anything up,
so whoever gets something working will lead the day.

-Patrick

gphilipp
Offline
Joined: 2003-06-10
Points: 0

(Moved from http://www.javadesktop.org/forums/thread.jspa?threadID=4613&messageID=26...)

About the data model, I would suggest using the one showcased by Hibernate: http://caveatemptor.hibernate.org, so we don't have to create new structures. The data model is quite close (somewhat downgraded, though) to what you would find in an enterprise-class model. Don't forget that JDNC is primarily designed to target enterprise developers. It would also be a good way to get exposure for the project.

As an alternative, Spring also has its own demo application (PetClinic), and as far as I know, the spring-rich-client (http://www.springframework.org/spring-rcp.html) project uses this model for its sample RIA application. (As a side note, you can find their public forum here: http://forum.springframework.org/viewforum.php?f=6)

About the database, Hypersonic is a safe bet because it's well known, small, and Hibernate already has a dialect for it. CaveatEmptor uses it by default (although of course Hibernate is database-agnostic).

Gilles Philippart
Senior IT consultant
http://www.beijaflore.com

rbair
Offline
Joined: 2003-07-08
Points: 0

Hey Gilles,

I took your suggestion and went to see what Hibernate had to offer us in terms of a data model, and I think it will work great. I extracted the schema they use for their test database, and added a few fictitious auction users and items. The schema I generated is in cvs at https://jdnc-incubator.dev.java.net/source/browse/jdnc-incubator/src/jav...

I haven't tested this schema for errors yet; I just wanted to post it to keep everyone up to date as to where we're at.

Richard

rbair
Offline
Joined: 2003-07-08
Points: 0

Since we've covered the basics for testing, I think it's time to get started on the DataModel design itself. First on our list is DataModel navigation. So that we are all on the same page, let me quote from scottr, who gave an excellent overall view of the DataModel/Binding architecture (edited for clarity):

"I agree that a DataModel implementation should support lazy loading if needed... As far as the view code (ie. the form) and the Bindings are concerned, they only see a DataModel, and bind components to various nodes on the DataModel graph in an agnostic way.

As I see it, a DataModel describes the 'structure' of the underlying data (javabeans, maps of maps, etc) as well as supporting read/write to that underlying data. The Binding classes glue view elements (eg. swing components) to nodes on that structure, either by evaluating path expressions, or by manually walking the DataModel tree down to the DataModel node it wants to bind to. The DataModel draws its knowledge of the structure from MetaData, which can be configured from XML, or introspection on BeanInfo, or WSDL, or by some other service.

Thus, when initially creating the form, the form can be built from the DataModel even when the model is only partially or not at all loaded, because it binds components based on the structure, not the value of the underlying data.

Then, at rendering time, all the form Bindings request actual values from the DataModel node they bind to (via 'pull' methods). When the [DataModel] changes (eg. a user clicks a different row in a master table, or clicks a 'next' button), the form Bindings re-request values from the model...

By putting all the emphasis on the DataModel to handle these issues, the view becomes very 'dumb'. It would not be very difficult to modify the Binding interface in this case to bind to any generic object (ie. instead of a Swing component), and you can then re-use the same DataModel API to bind to other view technologies (eg. JSF, etc). But that is probably outside of this discussion though."

In summary, we have DataModel->Binding->GuiComponent. DataModels support multiple records (or rows). Bindings will pull data from the DataModel into the GuiComponent whenever 1) the current record changes in the DataModel, or 2) the value in the DataModel that the GuiComponent is bound to changes (for instance, if another component also bound to the value updates it), or 3) possibly if the MetaData on the DataModel changes.

With that background the interaction between the navigational methods and the bound gui components should be clear. Navigational methods will move the current record index, which in turn fires an event. Bindings hear the event and update gui components accordingly. The only gotcha here is that a gui component could be in the midst of an update. So it may be necessary to read values from the component(s) into the DataModel prior to moving the current record index.
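
To illustrate the pull model, here is a bare-bones sketch of how a Binding might react to those events. All of the class and method names used here (DataModel, DataModelListener, DataModelEvent, recordIndexChanged, getValue) are placeholders, not the actual JDNC API:

// Hypothetical binding that pulls the current record's value into a text field
// whenever the model's current record index (or the bound value) changes.
public class TextFieldBinding implements DataModelListener {

    private final DataModel model;
    private final String fieldName;
    private final javax.swing.JTextField field;

    public TextFieldBinding(DataModel model, String fieldName, javax.swing.JTextField field) {
        this.model = model;
        this.fieldName = fieldName;
        this.field = field;
        model.addDataModelListener(this);   // listen for navigation and value changes
    }

    public void recordIndexChanged(DataModelEvent e) {
        pull();
    }

    public void valueChanged(DataModelEvent e) {
        if (fieldName.equals(e.getFieldName())) {
            pull();
        }
    }

    // 'Pull' step: read the value for the bound field from the current record.
    private void pull() {
        Object value = model.getValue(fieldName);
        field.setText(value == null ? "" : value.toString());
    }
}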

The next question is whether beforeFirst and/or afterLast functionality should be supported. In addition to the current setRecordIndex method, the following methods should be supported for navigation:

public boolean firstRecord();
public boolean prevRecord();
public boolean nextRecord();
public boolean lastRecord();
public boolean hasNext();
public boolean hasPrev();

Richard

scottr
Offline
Joined: 2004-09-05
Points: 0

> DataModels support multiple records (or rows).

I commented in another thread that DataModel is currently oriented towards singular records (ie. single JavaBean, Map, RowSet, etc), with only nominal support for 2 dimensional (ie. collection style) data structures, in the form of a selected index property.

I think that we should treat DataModels as wrapping 2-dimensional tabular structures by default, and add methods to support these. The existing methods would then remain to support the singular-record case, as they do currently.

It occurred to me that modification of the selection indices could be encapsulated by a separate selection model, similar to the way JTables use a ListSelectionModel.

[My Edit - scottr]
By separate selection model, I mean still accessed through the DataModel as a property. But then that could be overkill.

> With that background the interaction between the
> navigational methods and the bound gui components
> should be clear. Navigational methods will move the
> current record index, which in turn fires an event.
> Bindings hear the event and update gui components
> accordingly. The only gotcha here is that a gui
> component could be in the midst of an update. So it
> may be necessary to read values from the component(s)
> into the DataModel prior to moving the current record
> index.
>

This probably should not be too much of an issue. Hopefully if we are following good Swing practices, updates to GUI components should all be performed on the EventDispatchThread, so updates should not overlap each other.

> The next question is whether beforeFirst and/or
> afterLast functionality should be supported. In
> addition to the current setRecordIndex method, the
> following methods should be supported for
> navigation:
>
> public boolean firstRecord();
> public boolean prevRecord();
> public boolean nextRecord();
> public boolean lastRecord();
> public boolean hasNext();
> public boolean hasPrev();
>

I think the methods you have mentioned above are very specific to a jdbc ResultSet/RowSet model, which may or may not translate well to a bean collection or a 2-d array model. I think the minimum that should be required are:

public void setSelectedIndex(int index);
public int getSelectedIndex();

Presuming that DataModel also supports tabular query methods (like querying for row count), then it becomes an easy matter to write generic navigation actions that perform standard iteration.
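
For example, generic next/previous actions could be written once against just those two methods plus a row count. A sketch, assuming a getRowCount() method also exists on the (still hypothetical) DataModel:

// Generic navigation helpers built only on the minimal index/count methods.
public final class Navigation {

    public static boolean next(DataModel dm) {
        int i = dm.getSelectedIndex();
        if (i + 1 < dm.getRowCount()) {
            dm.setSelectedIndex(i + 1);
            return true;
        }
        return false;
    }

    public static boolean previous(DataModel dm) {
        int i = dm.getSelectedIndex();
        if (i > 0) {
            dm.setSelectedIndex(i - 1);
            return true;
        }
        return false;
    }

    private Navigation() {}
}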

Scott

rbair
Offline
Joined: 2003-07-08
Points: 0

Hey Scott,

> > The next question is whether beforeFirst and/or
> > afterLast functionality should be supported. In
> > addition to the current setRecordIndex method, the
> > following methods should be supported for
> > navigation:
> >
> > public boolean firstRecord();
> > public boolean prevRecord();
> > public boolean nextRecord();
> > public boolean lastRecord();
> > public boolean hasNext();
> > public boolean hasPrev();
> >
>
> I think the methods you have mentioned above are very
> specific to a jdbc ResultSet/RowSet model, which may
> or may not translate well to a bean collection or a
> 2-d array model. I think the minimum that should be
> required are:
>
> public void setSelectedIndex(int index);
> public int getSelectedIndex();
>

My last project used a DataModel-like entity that wrapped objects alone, and I used all of these methods extensively. If a DataModel were to wrap a single object, then it's obviously overkill, but I just as often would wrap a collection of objects, in which case the navigational code was nice.

The point is well taken that the methods I listed are not strictly necessary, since the logic to implement them derives from getRecordCount, getRecordIndex and setRecordIndex. However, I think they provide a nice abstraction in that you don't have to take the time (or risk the bugs) of implementing a next method yourself, since the DataModel would do the work for you.
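
In other words, the convenience methods could live once in an abstract base class and be derived entirely from the three primitives. A rough sketch (the method names mirror the earlier list and are not final):

public abstract class AbstractDataModel implements DataModel {

    // Primitives that concrete subclasses must provide.
    public abstract int getRecordCount();
    public abstract int getRecordIndex();
    public abstract void setRecordIndex(int index);

    // Convenience navigation derived from the primitives; each returns
    // whether the current record actually moved.
    public boolean hasNext() { return getRecordIndex() + 1 < getRecordCount(); }
    public boolean hasPrev() { return getRecordIndex() > 0; }

    public boolean nextRecord() {
        if (!hasNext()) return false;
        setRecordIndex(getRecordIndex() + 1);
        return true;
    }

    public boolean prevRecord() {
        if (!hasPrev()) return false;
        setRecordIndex(getRecordIndex() - 1);
        return true;
    }

    public boolean firstRecord() {
        if (getRecordCount() == 0) return false;
        setRecordIndex(0);
        return true;
    }

    public boolean lastRecord() {
        if (getRecordCount() == 0) return false;
        setRecordIndex(getRecordCount() - 1);
        return true;
    }
}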

> Presuming that DataModel also supports tabular query
> methods (like querying for row count), then it
> becomes an easy matter to write a generic navigation
> actions that perform standard iteration.

Take a look at https://jdnc-incubator.dev.java.net/source/browse/jdnc-incubator/src/jav... for a class that wraps a list of objects for iteration. Using this class, adding support for navigational methods on a JavaBeanDataModel would be trivial.

Richard

scottr
Offline
Joined: 2004-09-05
Points: 0

>
> My last project used a DataModel like entity that
> wrapped objects alone, and I used all of these
> methods extensively. If a DataModel were to wrap a
> single object, then its obviously overkill, but I
> just as often would wrap a collection of objects in
> which case the navigational code was nice.
>
> The point is well taken that the methods I listed are
> not necessary since the logic to implement them is
> taken from getRecordCount, getRecordIndex and
> setRecordIndex. However, I think they provide a nice
> abstraction in that you don't have to take the time
> (or risk the bugs) in implementing a next method
> since the DataModel would do the work for you.
>

That's probably right. And most standard forms I've seen provide fairly common iteration as you've described.

The code link you provided looks like a good way to wrap that functionality for a bean collection. I was specifically thinking about the implications of supporting direct indexed access (eg. setSelectedIndex(10)) for RowSets/ResultSets. Collections can support that, but for RowSets/ResultSets you need JDBC 3 compatibility, or you have to iterate through all intermediate records.

Scott

dhall
Offline
Joined: 2006-02-17
Points: 0

Just to check in -- I'm back online (at work, anyway). It'll take a little time to catch up.

rbair
Offline
Joined: 2003-07-08
Points: 0

On to the question of master/detail functionality. This proposal is based on everything we've discussed in multiple threads on this topic, so far. First, each DataModel implementation would support a 'getDataModel(String)' method that would be used for getting a detail DataModel. Either MetaData or some other similar mechanism would be used to specify what kind of DataModel should be returned. Along with what kind of DataModel to return, it would also contain information on how to load the DataModel, when to load the DataModel, what information in the master DataModel to use (foreign keys, etc) to fetch the correct information for the detail DataModel, when to commit, whether to cache data, etc.

Here's a possible way to code it up:

1) Let DataModel contain the following method: public DataModel getDataModel(String key)
2) Place in AbstractDataModel the following data structure: private Map detailModels = new HashMap();
3) Let the map contain a String for a key, and an object of type MasterDetailDescriptor for the value.
4) MasterDetailDescriptor would be the place to store information such as: a policy for caching, a policy for loading data (lazily, eagerly, on demand), the DataLoader to use for loading, etc.
5) Customizing the master/detail relationship would simply be a matter of changing the descriptor (what policy to use, for instance) for a specific detail DataModel

You may have noticed that in item #1 the method calls for a String named 'key'. In a JavaBeanDataModel world the key would simply be the property name that you want to base your detail DataModel on; for example, "orders". In the RowSetDataModel you don't have a column containing detail items like you do with a POJO; the details are contained either in another RowSet or still in the database if you are loading data "on demand". Therefore, the key is some value chosen by the developer that describes what is contained in the detail DataModel. In this case, it would still be "orders". The point of calling it a 'key' as opposed to 'fieldName' is that in the RowSetDataModel it isn't a field at all, so 'fieldName' could be a point of confusion.

I see many of the items in the MasterDetailDescriptor being interfaces (LoadingPolicy, for example). Sensible default implementations would exist, but by making them interfaces we allow individuals to write custom implementations as well. I would probably need a custom implementation because I eager-load a bunch of data from the database. What data is loaded depends on the master RowSet, so each master RowSet would have a custom loading policy.
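
Here is a very rough sketch of what the descriptor side could look like, purely to make the proposal concrete; every name below is invented for illustration:

// Describes how one detail DataModel relates to its master.
public class MasterDetailDescriptor {
    private String key;                   // e.g. "orders"
    private String[] masterFields;        // master fields used for linking (may be more than one column)
    private LoadingPolicy loadingPolicy;  // lazy, eager, on demand...
    private CachePolicy cachePolicy;
    private DataLoader loader;            // performs the actual fetch
    // getters and setters omitted
}

// Pluggable policies with sensible default implementations.
public interface LoadingPolicy {
    boolean shouldLoad(DataModel master, DataModel detail);
}

public interface CachePolicy {
    boolean isCacheable(DataModel detail);
}

AbstractDataModel would then keep a Map of these descriptors keyed by String, and getDataModel(String key) would look up the descriptor and hand the work off to its loader and policies.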

Looking forward to the discussion :)

Richard

rbair
Offline
Joined: 2003-07-08
Points: 0

Over the past couple of days I have been researching how other programming environments implement master/detail functionality. I've seen some good ideas, and some weird ones. This research has led me to consider an alternate master/detail design. I've drawn a rough design, saved it as a jpg and placed it in cvs (for lack of a better place) at https://jdnc-incubator.dev.java.net/source/browse/jdnc-incubator/src/jav...

As you can see, it's not a very complicated concept. The hardest part was figuring out which program to use to draw it! The network cloud is, of course, removable so that the data store can be located in the same space as the DataSources and DataModels. This design has a new interface called DataSource. This DataSource is not to be confused with the javax.sql.DataSource interface. Rather, it is an abstraction akin to the DataLoader. In fact, it will either contain a DataLoader or will supersede it. The DataSource handles the following responsibilities:

1)Responsible for loading data from the data store & populating DataModels
2)Knows which DataModels it must populate and how to get data for them
3)Responsible for saving DataModel data (persistence)
4)Can service multiple DataModels
5)Loads data asynchronously into the DataModel, and saves data asynchronously from the DataModel.

The DataSource may also be responsible for transaction demarcation, but that's a different discussion.

The DataModel handles the following responsibilities:

1)Contains public load, save, and refresh functions
2)Acts as the gui's main interface (the gui only interacts with DataModels and Bindings)
3)Contains a setMasterDataModel(DataModel, String key) method. Can only be linked with one master DataModel
4)Contains references to each DataModel for which it is the master. These DataModels are referenced by key. Contains a getDataModel(String key) method for retrieving a detail data model by key.
5)Has a setMasterDetailBinding(MasterDetailBinding b) method. Some other name could be used besides *Binding.
6)Has a setDataSource(DataSource ds) method
7)Propagates the proper events to detail DataModels. For instance, if the current record changes then the master DataModel informs detail DataModels of the change so they can repopulate themselves. It also propagates saves, loads, and refreshes.

The MasterDetailBinding doesn't have to extend the Binding interface, though it may be convenient (not sure yet about that). The binding is a piece of code that contains the logic for populating a detail DataModel every time the master tells it that it needs to be repopulated (load, refresh, when the current record changes, etc). It knows about the DataSource for the detail DataModel, as well as the fields in the master DataModel. With this information, it should be fully capable of fetching new values for the detail DataModel.

In summary, the DataSource handles all of the work of interacting with the data store (be it a database, file system, remote server, etc). The DataModel handles all of the navigational and master/detail logic. The MasterDetailBinding performs the actual logic necessary to load a detail DataModel. Custom loading scenarios may require a custom DataSource, or a custom MasterDetailBinding, but shouldn't require a custom DataModel.
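
To make the division of labour concrete, here is a bare sketch of what the DataSource side might look like. The signatures are only illustrative, not a proposal for the final API:

// Handles all interaction with the underlying data store (database, file,
// remote server...). Lives in the same JVM as the DataModels it services and
// performs its I/O off the event-dispatch thread.
public interface DataSource {

    // Register a DataModel that this DataSource is responsible for populating.
    void addDataModel(DataModel model);
    void removeDataModel(DataModel model);

    // Asynchronously (re)load data from the store into the given DataModel.
    void load(DataModel model);
    void refresh(DataModel model);

    // Asynchronously persist the DataModel's pending changes back to the store.
    void save(DataModel model);
}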

This design maintains all of the benefits of the others we have discussed, but looks a lot cleaner to me. I'm not sure how transactions will play into this, maybe somebody with more experience in that department can chime in.

Richard

scottr
Offline
Joined: 2004-09-05
Points: 0

[snip]
> In summary, the DataSource handles all of the work of
> interacting with the data store (be it a database,
> file system, remote server, etc). The DataModel
> handles all of the navigational and master/detail
> logic. The MasterDetailBinding performs the actual
> logic necessary to load a detail DataModel. Custom
> loading scenarios may require a custom DataSource, or
> a custom MasterDetailBinding, but shouldn't require a
> custom DataModel.
>
> This design maintains all of the benefits of the
> others we have discussed, but looks a lot cleaner to
> me. I'm not sure how transactions will play into
> this, maybe somebody with more experience in that
> department can chime in.
>
> Richard

It looks like an interesting design. I think the key thing about it is that it places more of the functionality we have been discussing into the DataSource (which really needs a different name to avoid confusion :)), rather than into the DataModels themselves.

What we need to define is an event model, with a way of defining what triggers from a DataModel cause loading of partial graphs, etc. I think that a DataSource would typically always be contained within the same JVM, so comms between it and a DataModel could be frequent. But comms between a DataSource and its data store (which could be a local or remote database, or a web service, or ejb session beans, for example) should be able to be customised to be as infrequent and tight as needed.

Scott

rbair
Offline
Joined: 2003-07-08
Points: 0

> It looks like an interesting design. I think the key
> thing about it is that it places more of the
> functionality we have been discussing into the
> DataSource (which really needs a different name to
> avoid confusion :)), rather than into the DataModels
> themselves.

Ya, I'm not sure what to call it. I really like the name DataSource, except for the fact that there's already a javax.sql.DataSource! It's even more of a mess if you think about the naming conventions coming from Delphi, C++ Builder, Kylix, or the like. They use the names "Connection" for the actual connection to the db, "DataSet" for an object that is a cross between my proposed DataSource and DataModel, and "DataSource" for some of the functionality of the DataModel crossed with the Bindings. The DataSource I've outlined is really a DataLoader/DataSaver, with the ability to dynamically notify the DataModel of updates in the underlying data store to boot (UpdateNotifier). So I'm not sure what to call it.

> What we need to define is an event model, with a way
> of defining what triggers from a DataModel cause
> loading of partial graphs, etc. I think that a
> DataSource would typically always be contained within
> the same JVM, so comms between it and a DataModel
> could be frequent. But comms between a DataSource and
> its data store (which could be a local or remote
> database, or a web service, or ejb session beans, for
> example) should be able to be customised to be as
> infrequent and tight as needed.
>
> Scott

I agree. DataSource must always be in the same JVM as the DataModel because the DataModel will make calls to the DataSource on the event-dispatch thread. I've committed a rough draft of DataSource to incubator cvs (https://jdnc-incubator.dev.java.net/source/browse/jdnc-incubator/src/jav...). Check it out and see where I'm headed with it. Basically, the DataModel will just call methods on the DataSource whenever it wants to be refreshed, wants its data saved, or wants to become associated with the DataSource. Some design gotchas are listed in the class javadoc comments.

Richard

Anonymous

I've tried to think this idea out and see if it would be useful for me and what I am trying to do. This is what I came up with.

>5) The DataModel has a method for saving, loading/refreshing, undo, redo, txStart, txEnd, deleteRecord, addRecord, etc.

A way to implement such methods (undo, redo, etc.) is the following: Undoable Interface, Redoable Interface, an undoStack, a redoStack, and implementations of the Interfaces.

Undoable Interface – has one method – undo()

Redoable Interface – has one method – redo()

SetValueURImpl – one of several classes that implement both interfaces. The constructor would be passed the index of the row, the fieldName that was set, the old value (before the set), the new value, and the DataModel itself. This object is created every time that setValue is called. With this information, the undo and redo methods now have what they need to be implemented. It is important to remember that when the undo method is called, another object must not be created and added to the undoStack, but when the redo method is called it does need to push another SetValueURImpl object onto the undo stack. URImpls need to be made for the other method calls as well, such as addRowURImpl, deleteRowURImpl, etc.

undoStack – in the DataModel this stack keeps track of the operations done thus far, allowing a path to be made to step back undoing each operation. SetValueURImpl and such objects are stored in this stack.

RedoStack – similar to the undoStack, though when a new operation is performed this stack must be emptied. Objects are only stored in this stack until such a time.

As for transactions, when txEnd is called, both stacks are emptied. However, when txRollBack is called, the undoStack is emptied, undoing each operation as it is popped.
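
A compressed sketch of the idea, just to show the shape of it. The class names follow the description above, while the DataModel method used (setValueSilently) is a placeholder for however the model applies a value without recording a new undo entry:

interface Undoable { void undo(); }
interface Redoable { void redo(); }

// Records a single setValue call so it can be reversed or replayed.
public class SetValueURImpl implements Undoable, Redoable {

    private final DataModel model;
    private final int rowIndex;
    private final String fieldName;
    private final Object oldValue;
    private final Object newValue;

    public SetValueURImpl(DataModel model, int rowIndex, String fieldName, Object oldValue, Object newValue) {
        this.model = model;
        this.rowIndex = rowIndex;
        this.fieldName = fieldName;
        this.oldValue = oldValue;
        this.newValue = newValue;
    }

    // Reverses the edit; the caller must not record a new undo entry for this.
    public void undo() { model.setValueSilently(rowIndex, fieldName, oldValue); }

    // Replays the edit; the caller pushes this object back onto the undo stack.
    public void redo() { model.setValueSilently(rowIndex, fieldName, newValue); }
}

// Inside the DataModel implementation:
//   private java.util.Stack undoStack = new java.util.Stack();
//   private java.util.Stack redoStack = new java.util.Stack(); // cleared on each fresh operation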

What do you think?
Gianni

Patrick Wright

Gianni

[snip]
>
> A way to implement such methods (undo, redo, etc.) is the following:
> Undoable Interface, Redoable Interface, an undoStack, a redoStack, and
> implementations of the Interfaces.
>
> Undoable Interface – has one method – undo()
>
> Redoable Interface – has one method – redo()
[snip]

I have seen people do this with a Command pattern--any action that takes
place is an instance of a Command, those are stacked up and in this case,
would be reversible. That is the critical point, however, whether the
operations are reversible.

If we are talking about something like a RowSet--let's say, just an
uninteresting tabular collection of data, named columns and rows--then
reversing an update to a row is pretty mechanical. If we are talking about
Beans, then I think we are forcing people to use beans like structs in
C--that is, the getFoo() and setFoo() methods only read and write the
values of variable foo with no side effects. That is OK, but only one type
of bean.

If that is the limitation, then I'd suggest we make that clear in the spec
"...the design supports an 'undo' for JavaBean mutators on a
field-by-field basis as long as field updates have no side effects besides
changing the value of the mutatable field."

Patrick

Amy Fowler

jdnc-interest@javadesktop.org wrote:

> The JDNC binding framework will make sure the list is notified of the changes as soon as they are committed to the DataModel. The question is, when is the data committed? Aim, Mark, Ramesh? I don't remember seeing anything that specified when data is saved.

I assume by "committed" you mean the values are pushed from the
GUI components (in form or table) to the local data model (vs. pushing
updates back to server)?

The binding API is quite flexible on this in order to support various
strategies, however the JXForm component takes the push "all or nothing"
approach. In other words, sometimes if some of the fields contain invalid
edits, then you do not want ANY of the edits to be pushed to the model
until all errors have been corrected and the full record of data is
considered valid. This is one benefit of the push/pull model over
having "immediate" bindings which automatically push values to the
data model individually. But the current Binding API could be made to
work in a more immediate mode by pushing right after validating, which
happens typically when focus leaves the component.

Aim

rbair
Offline
Joined: 2003-07-08
Points: 0

> I assume by "committed" you mean the values are pushed from the GUI components (in form or table) to the local data model (vs. pushing updates back to server)?

Ya, that's correct. Sorry for the ambiguity :)

[snip]
> But the current Binding API could be made to work in a more immediate mode by pushing right after validating, which happens typically when focus leaves the component.

Thanks for the clarification. As a note, sometimes you may also want the push to happen after each keystroke/modification. For instance, I have a JList of items with an icon representing the state of the item (finished, partial, need attention). When an item is selected in the list, the detail panel shows the details for that item. When the user clicks the "completed" checkbox, the status in the JList needs to be updated immediately. Therefore, Bindings must support pushing of data immediately in some cases.
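
For instance, using plain Swing listeners and a placeholder DataModel.setValue method, an 'immediate' binding for that checkbox might push on every click rather than waiting for focus to leave:

import java.awt.event.ItemEvent;
import java.awt.event.ItemListener;
import javax.swing.JCheckBox;

// Pushes the checkbox state into the model the moment it changes, so any other
// view bound to the same field (e.g. the status icon in the JList) updates immediately.
public class ImmediateCheckBoxBinding implements ItemListener {

    private final DataModel model;      // placeholder for the eventual JDNC type
    private final String fieldName;

    public ImmediateCheckBoxBinding(JCheckBox box, DataModel model, String fieldName) {
        this.model = model;
        this.fieldName = fieldName;
        box.addItemListener(this);
    }

    public void itemStateChanged(ItemEvent e) {
        model.setValue(fieldName, Boolean.valueOf(e.getStateChange() == ItemEvent.SELECTED));
    }
}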

Richard

dhall
Offline
Joined: 2006-02-17
Points: 0

Patrick:

>> 1) The "List Editor", if I understand it right, is where a subset of
>> columns is shown in a list, and by selecting a row, I can edit the full
>> set of columns in the selected row. One efficiency some toolkits provide
>> is that on editing the details, the list is automatically updated because
>> the models are "shared" (toolkit dependent)

Agreed, that's a generally common case. One variation is that the list contains only the data necessary to present the list values, and the rest of the record is lazily loaded. It can be written such that the list contains either keys or records, and when a key is resolved, it is replaced by the record. I'm not sure in JDNC terms who would be responsible for resolving the record from its displayed key, but it's one way to prevent the unnecessary loading of complex object graphs that you describe in your point #2.
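
One way to picture the lazy variant: the list model holds keys until a row is actually needed, then swaps in the full record. A sketch, with the Key, Record and RecordResolver types invented purely for illustration:

// List model whose entries start out as keys and are replaced by full records on demand.
public class LazyListModel extends javax.swing.AbstractListModel {

    private final java.util.List entries;     // holds either a Key or a Record
    private final RecordResolver resolver;    // knows how to fetch a Record for a Key

    public LazyListModel(java.util.List keys, RecordResolver resolver) {
        this.entries = keys;
        this.resolver = resolver;
    }

    public int getSize() { return entries.size(); }

    public Object getElementAt(int index) {
        Object entry = entries.get(index);
        if (entry instanceof Key) {
            entry = resolver.resolve((Key) entry);  // load the full record lazily
            entries.set(index, entry);              // replace the key with the record
        }
        return entry;
    }
}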

Also -- ideally, the 'list' part of the list editor should be little more than a source of selection events. I'd like to be able to use trees, tables, lists, and even groups of radio/check boxes interchangeably in that role (this is probably already true in JDNC to some extent -- I'm madly trying to push out a release of my own library and haven't had the time to delve into JDNC to the extent that I want to). A really cool extension would be a graphic with hot-spots (think HTML image maps): for example, the graphic map of the world that most environments use to select and set the time zone. It could also be a source of selection events and potentially be dropped into a ListEditor.

I understand the inherent conflict between wanting the list side to be as simple as possible, but that it may also have to take over responsibility for key resolution.

>> 2) When there is a 1:M relationship from one list to another (drill-down),
>> I've found it's important for the developer to have control over the
>> loading process. Great for the API to facilitate the linking so that
>> selecting a row in the master list loads the detail list. But generally
>> the API should allow this to be programmatic (say, event-driven) as
>> opposed to automatic, for cases where loading the child data is
>> slow/expensive.

>> 3) There are cases I've seen where the link between the two forms/lists is
>> in more than one column--hopefully whatever results from this discussion
>> won't tie us into a single column key link.

Absolutely true and extremely important to keep in mind.

>> 4) Generally would be good to have flexibility around
>> key linking relationship
>> loading behavior
>> transaction scope
>> persistence behavior

I like your list -- I would add initialization behaviour. Loading of invoice-type records is a difficult problem to describe in a framework like JDNC such that we get the flexibility to solve the variety of problems we'll encounter. However, describing how to create/initialize the various parts of the structure in an add mode is just as hard, or harder. (I point out this trap having fallen into it up to my -- neck). Issues to keep in mind include:

- is any part of the primary key assigned on the server, meaning it won't exist on the client until after the structure is persisted
- is any part of the primary key computed (ie, invoice line item numbers assigned sequentially within a single invoice)

I'd also add that it might be necessary to be able to describe cardinality restrictions: a properly formed structure might have a specific value of M in mind. (This, however, might also become a trap for the naive)

Dave

Patrick Wright

Dave-

This is all a good discussion I think. My sense is that the JDNC team (or
the community) will have to vote on what JDNC is about, what it is trying
to accomplish and facilitate. One reason why Access and FileMaker are so
popular is that they focus very clearly on a certain type of common
desktop, database-bound application. They make it easy to write forms,
lists, linked forms, to a database, and if you have done it once you can
really crank small apps out quickly.

I don't like them for serious work precisely because they are focused too
sharply on those goals, and trying to work around the embedded design is
difficult, options are poorly documented, etc. PowerBuilder was generally
more friendly about letting you get in the middle of the automated
operations, and, funny enough, it was criticized back in its day for not
being easy enough to punch out apps really quickly!

So, it is an open question how far JDNC wants to go in facilitating
master-detail forms of the type we are suggesting. For the most part IMO,
Swing is a pretty UI-centric framework, that is, not specifically oriented
at making specific user-goals (like editing small databases) easy. The
questions we've been raising on this thread could only be addressed
seriously if there were larger goals for JDNC than just improving the
UI-usability factor of Swing.

Anyway
Patrick

Amy Fowler

Patrick Wright wrote:
>
> So, it is an open question how far JDNC wants to go in facilitating
> master-detail forms of the type we are suggesting. For the most part IMO,
> Swing is a pretty UI-centric framework, that is, not specifically oriented
> at making specific user-goals (like editing small databases) easy. The
> questions we've been raising on this thread could only be addressed
> seriously if there were larger goals for JDNC than just improving the
> UI-usability factor of Swing.

Right now the Sun JDNC team is working on drafting a JDNC1.0 roadmap
proposal to try to nail down an initial target feature list, milestones,
and schedule. It is certainly our hope that JDNC will vastly simplify
the task of hooking Swing up to database data, which would include
some facilitation of master-detail relationships. Others in this
community have database knowledge that far exceeds that of our local
team, so all the discussion is very exciting and gratifying. We're
quite happy to see many of you take the lead on driving some of these
debates and we'll be quite happy to see concrete API proposals as well.

You should see the 1.0 roadmap proposal in the first half of September.

Aim

---------------------------------------------------------------------
To unsubscribe, e-mail: jdnc-unsubscribe@jdnc.dev.java.net
For additional commands, e-mail: jdnc-help@jdnc.dev.java.net

rbair
Offline
Joined: 2003-07-08
Points: 0

Hey Aim,

A long and closely related conversation can be found at http://www.javadesktop.org/forums/thread.jspa?threadID=4127. That thread boiled down to a discussion of how best to represent Master/Detail relationships, and Scott makes some really good points.

Master/Detail is a complex subject. There are several approaches to the issue that should be considered:

1) When a master item is selected, some query should be executed against a database to get the detail items
2) When a master item is selected, some method should be called on an object (perhaps the master item itself) to get a collection of data which contains the detail items
3) When a master item is selected, some filter should be applied to a collection or DataModel based on some criteria.

In other words, when the master item is selected, some action must take place which will produce the necessary detail items.

Scott suggested having a DataModel contain a getDataModel(String fieldName) method that would return the detail DataModel. That detail data model would be notified by the parent DataModel whenever the current index changes so that the detail data model can refresh its items.
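A rough sketch of what that could look like as an interface -- purely illustrative, not actual JDNC API (the listener type and its method names are invented here):

    public interface DataModel {
        // field access, as in the incubator code
        Object getValue(String fieldName);
        void setValue(String fieldName, Object value);

        // Scott's suggestion: a detail DataModel per field, e.g. getDataModel("lineItems")
        DataModel getDataModel(String fieldName);

        // detail models (or their bindings) register here so they can refresh
        // themselves when the master's current index moves
        void addCurrentIndexListener(CurrentIndexListener l);
        void setCurrentIndex(int index);
        int getCurrentIndex();
    }

    interface CurrentIndexListener extends java.util.EventListener {
        void currentIndexChanged(DataModel source, int oldIndex, int newIndex);
    }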

As far as the GUI goes, check out the JMasterDetail component in the JGui code (http://sourceforge.net/projects/xom/). It's kind of the right idea, I think, but it's still too complex. There's something wrong there, but at least it's a starting point.

Richard

Patrick Wright

Dave

Some extensions to your thoughts

1) The "List Editor", if I understand it right, is where a subset of
columns is shown in a list, and by selecting a row, I can edit the full
set of columns in the selected row. One efficiency some toolkits provide
is that on editing the details, the list is automatically updated because
the models are "shared" (toolkit dependent)

2) When there is a 1:M relationship from one list to another (drill-down),
I've found it's important for the developer to have control over the
loading process. Great for the API to facilitate the linking so that
selecting a row in the master list loads the detail list. But generally
the API should allow this to be programmatic (say, event-driven) as
opposed to automatic, for cases where loading the child data is
slow/expensive.

3) There are cases I've seen where the link between the two forms/lists is
in more than one column--hopefully whatever results from this discussion
won't tie us into a single column key link.

4) Generally would be good to have flexibility around
key linking relationship
loading behavior
transaction scope
persistence behavior

One thing I think JDNC promises to give us is some relief from the data
binding angle, which is currently a big hassle in this whole equation.

Patrick

---------------------------------------------------------------------
To unsubscribe, e-mail: jdnc-unsubscribe@jdnc.dev.java.net
For additional commands, e-mail: jdnc-help@jdnc.dev.java.net

rbair
Offline
Joined: 2003-07-08
Points: 0

Hey Patrick,

> 1) The "List Editor", if I understand it right, is
> where a subset of
> columns is shown in a list, and by selecting a row, I
> can edit the full
> set of columns in the selected row. One efficiency
> some toolkits provide
> is that on editing the details, the list is
> automatically updated because
> the models are "shared" (toolkit dependent)

The JDNC binding framework will make sure the list is notified of the changes as soon as they are committed to the DataModel. The question is, when is the data committed? Aim, Mark, Ramesh? I don't remember seeing anything that specified when data is saved.

One approach is "auto save", where the binding would have a listener on the component and update the DataModel whenever the component changes (say, somebody clicks the checkbox). The other approach is to save it all up and not send anything to the DataModel until somebody clicks "ok" or until some other transactional boundary is reached. A third approach is to auto save the data to the DataModel, but only have the DataModel notify everybody else of changes when the tx is committed (assuming we had some semantics for that).
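To make the choice concrete, it could be surfaced as a commit policy on the binding -- just a sketch with invented names, nothing in the current code:

    // Hypothetical component-to-DataModel binding with a pluggable commit policy.
    public abstract class FieldBinding {
        public static final int AUTO = 0;          // push on every component change
        public static final int ON_REQUEST = 1;    // push only when push()/"ok" is invoked
        public static final int AUTO_DEFERRED = 2; // push immediately, but the DataModel defers
                                                   // notifying other listeners until the tx commits
        private int commitPolicy = ON_REQUEST;

        public void setCommitPolicy(int policy) { this.commitPolicy = policy; }

        // wired to a component listener (e.g. an ItemListener on the checkbox)
        protected void componentChanged() {
            if (commitPolicy == AUTO || commitPolicy == AUTO_DEFERRED) {
                push(); // copy the component's value into the DataModel now
            }
        }

        // copy the component's current value into the DataModel
        public abstract void push();
    }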

> 3) There are cases I've seen where the link between
> the two forms/lists is
> in more than one column--hopefully whatever results
> from this discussion
> won't tie us into a single column key link.

Patrick, could you explain this one more? I don't really understand...

Thanks,
Rich

Patrick Wright


Hey Rich
>> 3) There are cases I've seen where the link between
>> the two forms/lists is
>> in more than one column--hopefully whatever results
>> from this discussion
>> won't tie us into a single column key link.
>
> Patrick, could you explain this one more? I don't really understand...
>

Suppose you have a master-detail 1:M explosion, for example, a list of
invoices and a list of line items. Clicking on an invoice in List 1 brings
up a list of line items in List 2.

If the backing datasource is a relational database, and we are looking at
data from two primary tables, one would expect that the pkey from the
Invoice table would be a fkey in the LineItem table. So, if we identify
InvoiceID as the pkey in our List 1, and say it is the linked column for
the query in List 2, then our framework can send appropriate events for
changes to InvoiceID as the selection changes, allowing us to even build
an auto-retrieve or auto-filter on List 2. MS Access among other tools has
this as a built in feature. It makes these kinds of lists easy to link.

However, there used to be a trend to have multi-column primary keys. Say
that our invoices were specific to a StoreBranch, identified by
StoreBranchID. Our Invoice table, being a child of StoreBranch, would
receive a 'migrated' primary key, and have StoreBranchID + InvoiceID as
its primary key (also possibly as an alternate key). LineItem might then
have StoreBranchID + InvoiceID + LineItemID.

This had some advantages--the linking between tables was pretty
predictable when you were coding, and the keys were generally meaningful.
It seems the trend has been to move away from this to single-column blind
pkeys.

However, to support that other form of data modelling, we should be able
to link two forms by the value of more than one column.
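At the API level, supporting that doesn't have to be much more than carrying parallel lists of column names in whatever describes the link -- a sketch with invented names:

    // Describes how a detail is keyed off a master; a single-column link is
    // just the one-element case.
    public class KeyLink {
        private final String[] masterColumns;
        private final String[] detailColumns;

        public KeyLink(String[] masterColumns, String[] detailColumns) {
            if (masterColumns.length != detailColumns.length) {
                throw new IllegalArgumentException("column lists must be the same length");
            }
            this.masterColumns = masterColumns;
            this.detailColumns = detailColumns;
        }

        public String[] getMasterColumns() { return (String[]) masterColumns.clone(); }
        public String[] getDetailColumns() { return (String[]) detailColumns.clone(); }
    }

    // e.g. new KeyLink(new String[] {"StoreBranchID", "InvoiceID"},
    //                  new String[] {"StoreBranchID", "InvoiceID"})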

This may only matter if JDNC is to support a sort of auto-linking between
two models.

Is that more clear?

Patrick

---------------------------------------------------------------------
To unsubscribe, e-mail: jdnc-unsubscribe@jdnc.dev.java.net
For additional commands, e-mail: jdnc-help@jdnc.dev.java.net

rbair
Offline
Joined: 2003-07-08
Points: 0

> This may only matter if JNDC is to support a sort of
> auto-linking between
> two models.
>
> Is that more clear?

Yes, thanks. I suspected you were talking about multi-column primary keys/foreign keys in the database sense, but terminology gets murky sometimes, especially when considering data stores other than databases.

It seems to me that the master/detail functionality should be layered just as the JXTable/JNTable functionality is layered. JDNC should be able to meet the needs of the most complex master/detail application as well as the small, database-centric kind.

For instance, an interface could exist that is responsible for linking two DataModels together. It gets notified when the master item changes so it can produce records in the detail.
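For example, something as small as this (illustrative only):

    // Links a master DataModel to a detail DataModel. Implementations decide
    // how the detail's records are produced (SQL, filesystem, JNDI, filtering, ...).
    public interface DataModelLink {
        DataModel getMaster();
        DataModel getDetail();

        // called when the master's current record changes; the implementation
        // repopulates the detail however it sees fit
        void masterChanged(DataModel master);
    }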

Complex apps can do custom things such as peruse the filesystem or a JNDI source for child records while simple apps could use a predefined SqlDataModel (or some such thing) that has methods for automagically getting child records from a database.

Obviously JDNC should not become tied to one particular data model representation (such as sql). High level components can be created that do so, but the low level components must support a broad range of technologies.

Richard

Patrick Wright

Richard


> Complex apps can do custom things such as peruse the filesystem or a JNDI
> source for child records while simple apps could use a predefined
> SqlDataModel (or some such thing) that has methods for automagically
> getting child records from a database.
>
> Obviously JDNC should not become tied to one particular data model
> representation (such as sql). High level components can be created that do
> so, but the low level components must support a broad range of
> technologies.

I absolutely agree, with the caveat that hopefully there will be easy ways
to accomplish some of the most common types of model binding. A search on
google shows just how many people have asked about binding a JTable to a
SQL result set in the past. Other than that, yeah, I agree, the toolkit
should be agnostic about the datasource.

Patrick

---------------------------------------------------------------------
To unsubscribe, e-mail: jdnc-unsubscribe@jdnc.dev.java.net
For additional commands, e-mail: jdnc-help@jdnc.dev.java.net

rbair
Offline
Joined: 2003-07-08
Points: 0

All,

We've done a good job laying out the design goals, but we've failed to come to a design decision. I'm going to summarize what we've discussed, and then make a couple of proposals that we can debate. Hopefully at the end, we'll all be happy, post an RFE and finally have world peace :)

1) Flexible key linking relationship (Patrick)
2) Flexible loading behavior (Patrick)
3) Flexible transaction scope (Patrick)
4) Flexible persistence behavior (Patrick)
5) Flexible initialization behaviour (Dave)
6) Data model agnostic (Richard)

Ok, here's the first proposal. It's based on the current JDNC design, with a modified Binding which allows binding to Objects, not just Components.

--------------------------------------------------

1) A DataModel is an island. It doesn't know about anything outside of itself, with the exception of listeners. It knows who has registered as a listener and will notify listeners when events happen (such as a change in the current record, or a new record is added, or a record removed, etc).

2) A Binding is used to bind a master DataModel to a detail DataModel. The Binding knows how to populate the detail DataModel by using the proper loader and performing the key linking/loading.

3) A Binding is used between each DataModel and the GUI components bound to it. These bindings register themselves with the components, if necessary, to detect selection events. For instance, a ListBinding might fill a JList with a row for every record in the DataModel. It might also listen for selection events and set the current record index in the DataModel to the corresponding selected element in the List.

4) MasterDetail bindings leave two methods unimplemented -- save & load. These methods are called by the binding automatically at the proper times. These tasks can be deferred to a Loader/Saver that performs the operation in a separate thread, or the implementation can be handled in a custom manner.

5) The DataModel has methods for saving, loading/refreshing, undo, redo, txStart, txEnd, deleteRecord, addRecord, etc.
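A skeletal rendering of points 1, 2 and 4, just to make the shape discussable (class and method names are placeholders, including the listener registration):

    public abstract class AbstractMasterDetailBinding {
        protected final DataModel master;
        protected final DataModel detail;

        protected AbstractMasterDetailBinding(DataModel master, DataModel detail) {
            this.master = master;
            this.detail = detail;
            // the binding, not the DataModel, knows about the relationship;
            // addDataModelListener stands in for the listener registration of point 1
            master.addDataModelListener(new DataModelListener() {
                public void currentRecordChanged(DataModel source) {
                    load(); // repopulate the detail when the master's current record moves
                }
            });
        }

        // point 4: left unimplemented; may delegate to a Loader/Saver on another thread
        public abstract void load();
        public abstract void save();
    }

    interface DataModelListener extends java.util.EventListener {
        void currentRecordChanged(DataModel source);
    }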

-----------------------------------------------------

The second proposal is similar, but removes the MasterDetail binding. Instead, each DataModel has a 'setMaster' method. The save and load methods in DataModel will either contain the code itself (requiring an AbstractDataModel that would be extended for almost everything), or alternatively the DataModel would have a PersistenceHandler interface that would be provided to save/load information.
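In the same skeletal form, proposal #2 would look roughly like this (placeholder names again):

    // each DataModel knows its master directly; no separate MasterDetail binding
    public abstract class AbstractDataModel {
        private DataModel master;
        private PersistenceHandler persistenceHandler; // the alternative to subclassing

        public void setMaster(DataModel master) { this.master = master; }
        public void setPersistenceHandler(PersistenceHandler handler) { this.persistenceHandler = handler; }

        public void load() {
            if (persistenceHandler != null) persistenceHandler.load(this, master);
        }

        public void save() {
            if (persistenceHandler != null) persistenceHandler.save(this, master);
        }
    }

    // supplied to the DataModel to do the actual I/O
    interface PersistenceHandler {
        void load(AbstractDataModel model, DataModel master);
        void save(AbstractDataModel model, DataModel master);
    }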

---------------------------------------------------

A third proposal is a combination of the first two: the underlying architecture would follow proposal #1, and a higher level abstraction along the lines of #2 would exist for such standard schemes as SQL based queries.

---------------------------------------------------

Thoughts?

Richard

dhall
Offline
Joined: 2006-02-17
Points: 0

A minor nit with #4 in your list. I believe that there really are four messages that the master may pass to the detail:

1) here's a (maybe partial) key, go load your data -- alternatively, here's your data, I loaded it for you. The key part can be optional, but in most cases comes from information in the master.

2) can you commit changes? (master has to coordinate commitment of changes with itself and any other details that exist), with an implied boolean response

3) commit your changes, with an implied success|failure response

4) rollback your changes (might be reload from cached values, might be reload from store) -- alternatively, reset to your last known good state.
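In interface form, those four messages might look roughly like this (invented names, offered only to anchor the discussion):

    // the contract a master (or a MasterDetail coordinator) would use to talk to
    // a detail; a detail could in turn play the master role for a further level
    public interface DetailParticipant {
        // 1) here's a (maybe partial) key -- or pre-loaded data -- go populate yourself
        void load(Object partialKey);

        // 2) can you commit? the caller coordinates the answers from all details
        boolean canCommit();

        // 3) commit your changes, reporting success or failure
        boolean commit();

        // 4) roll back to your last known good state (from cache or from the store)
        void rollback();
    }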

Another important point is that it is not safe for any particular UI piece to assume that it will only ever be used in the master role -- it should be perfectly valid for a 1:1 relationship to be embodied by a master:detail relationship where both the master and the detail are JXForms. This is implied by the idea that has come up earlier that, from the master's POV, the detail is a DataModel.

If we're to support extended relationships (1:M:M...), then the previous point must also extend to the MasterDetail class (however it comes to be structured).

rbair
Offline
Joined: 2003-07-08
Points: 0

Hey Dave,

>1) here's a (maybe partial) key, go load your data -- alternatively, here's your data, I loaded it for you. The key part can be optional, but in most cases comes from information in the master.

What I'm thinking is that the Binding has all the information it needs from the master simply by querying its fields, so it's not necessary for the master to supply the binding with a key or with the loaded data.

Here's how I envision the loading process happening when a MasterDetailBinding is used to bind one DataModel to another. Let's say we have an abstract base class called AbstractMasterDetailBinding. Let's also say that we have two JavaBeanDataModels that we are binding together, orderDM and lineItemsDM, with lineItemsDM being the detail. We use a BeanMasterDetailBinding to do the actual binding, which extends AbstractMasterDetailBinding. The BeanMasterDetailBinding constructor takes the two data models as params, as well as a field name, "lineItems".

When the "load" or "pull" method is called, it will ask the orderDM for the value of "lineItems". It receives this value and determines it is a Collection. It then loads the lineItemsDM with the items in the collection. When asked to "save" or "push" items back into the orderDM, it will take the items in the detail, place them in a collection, and call "setValue" on the orderDM for the "lineItems" field.
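Roughly, for the bean case (a sketch only: getValue/setValue(String) are the field accessors from the incubator DataModels, while setContents/getContents on the detail are invented stand-ins for "replace/read the detail's rows"):

    import java.util.ArrayList;
    import java.util.Collection;

    public class BeanMasterDetailBinding extends AbstractMasterDetailBinding {
        private final String fieldName; // e.g. "lineItems"

        public BeanMasterDetailBinding(DataModel master, DataModel detail, String fieldName) {
            super(master, detail);
            this.fieldName = fieldName;
        }

        // "load"/"pull": ask the master for the field value and, if it's a
        // Collection, hand its items to the detail model
        public void load() {
            Object value = master.getValue(fieldName);
            if (value instanceof Collection) {
                detail.setContents((Collection) value);
            }
        }

        // "save"/"push": gather the detail's items into a collection and set it
        // back on the master
        public void save() {
            master.setValue(fieldName, new ArrayList(detail.getContents()));
        }
    }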

Alternatively, if RowSetDataModels or ResultSetDataModels were used, we would use an SQLMasterDetailBinding. This binding's constructor would take the data models as params, as well as either a single fieldName or an array of fieldNames (or, if both data models are RowSetDataModels, it can query the RowSet for the keyColumns), and an SQL statement.

When told to "load" or "pull", it will get the values from the orderDM corresponding to the fieldNames, fill those values into the prepared statement constructed from the SQL, execute the prepared statement, and replace the rowset/resultset in the detail with the new resultset/rowset. Saves would follow the same idea.
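For the SQL flavor, the load might look something like this (again a sketch: connection handling, threading and resource cleanup are glossed over, and setResultSet() on the detail is an invented stand-in for replacing its resultset/rowset):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class SQLMasterDetailBinding extends AbstractMasterDetailBinding {
        private final String[] masterFieldNames; // values pulled from the master's current record
        private final String sql;                // e.g. "select * from LineItem where InvoiceID = ?"
        private final Connection connection;
        private final ResultSetDataModel sqlDetail;

        public SQLMasterDetailBinding(DataModel master, ResultSetDataModel detail,
                                      String[] masterFieldNames, String sql,
                                      Connection connection) {
            super(master, detail);
            this.sqlDetail = detail;
            this.masterFieldNames = masterFieldNames;
            this.sql = sql;
            this.connection = connection;
        }

        public void load() {
            try {
                PreparedStatement ps = connection.prepareStatement(sql);
                for (int i = 0; i < masterFieldNames.length; i++) {
                    // one ? per linked column, in the same order as masterFieldNames
                    ps.setObject(i + 1, master.getValue(masterFieldNames[i]));
                }
                ResultSet rs = ps.executeQuery();
                sqlDetail.setResultSet(rs);
            } catch (SQLException e) {
                throw new RuntimeException("detail load failed: " + e);
            }
        }

        public void save() {
            // pushing edits back would go through update/insert statements here
        }
    }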

Of course, anybody could write their own implementation of MasterDetailBinding, allowing all kinds of interesting loading/saving scenarios. The key is that DataModels don't need to know anything about their environment; the Bindings take care of all of that.

>2) can you commit changes? (master has to coordinate commitment of changes with itself and any other details that exist), with an implied boolean response

The binding framework currently has an isValid method, is this sufficient?

>3) commit your changes, with an implied success|failure response

By commit do you mean commit to the DataModel or the database/data store? Binding has a push() method that would commit the data to the DataModel, and the proposed save() method would be the same as a commit() method. Or are you thinking of something else?

> 4) rollback your changes (might be reload from cached values, might be reload from store) -- alternatively, reset to your last known good state.

Good idea.

>another important point is that it is not safe for any particular UI piece to assume that it can only be used in the master role -- it should be perfectly valid for a 1:1 relationship to be embodied by a master:detail relationship where both the master and the detail are JXForms. This is implied by the idea that has come up earlier that from the master's POV, the detail is a DataModel.

I think we're ok here. If you have a DataModel -> MasterDetailBinding -> DataModel structure, you can support 1:1 relationships, though not restricted to it. Similarly, it should naturally follow that DataModel -> MasterDetailBinding -> DataModel -> MasterDetailBinding -> DataModel would be possible, allowing 1:M:M or any other conceivable kind of cardinality.

Rich

dhall
Offline
Joined: 2006-02-17
Points: 0

Rich

I'm going to have to jump around a bit -- I want to continue the discussion, but I'm likely to fall offline for a few days (got a little weather coming from over the Bahamas)

> Hey Dave,
>
> >1) here's a (maybe partial) key, go load your data
> -- alternatively, here's your data, I loaded it for
> you. The key part can be optional, but in most cases
> comes from information in the master.
>
> What I'm thinking is that the Binding has all the
> information it needs from the master simply by
> querying its fields, so its not necessary for the
> master to supply the binding with a key or with the
> loaded data.
>
> Here's how I envision the loading process happening
> when a MasterDetailBinding is used to bind one
> DataModel to another. Lets say we have an abstract
> base class called AbstractMasterDetailBinding. Lets
> also say that we have 2 JavaBeanDataModels that we
> are binding together, orderDM and lineItemsDM, with
> lineItemsDM being the detail. We use a
> BeanMasterDetailBinding to do the actual binding,
> which extends AbstractMasterDetailBinding. The
> BeanMasterDetailBinding constructor takes the two
> data models as params, as well as a field name,
> "lineItems".
>
> When the "load" or "pull" method is called, it will
> ask the orderDM for the value of "lineItems". It
> receives this value and determines it is a
> Collection. It then loads the lineItemsDM with the
> items in the collection. When asked to "save" or
> "push" items back into the ordersDM, it will take the
> items in the detail, place them in a collection, and
> call "setValue" on the ordersDM for the "lineItems"
> field.
>
> Alternatively, if RowSetDataModels or
> ResultSetDataModels were used, we would use an
> SQLMasterDetailBinding. This binding's constructor
> would take the data models as params, as well as
> either a single fieldName or an array of fieldNames
> (or if both data models are RowSetDataModels it can
> query the RowSet for the keyColumns), and an sql
> statement.
>
> When told to "load" or "pull", it will get the values
> from the orderDM corrosponding to the fieldNames,
> fill the values in the prepared statement constructed
> by the sql, execute the prepared statement and
> replace the rowset/resultset in the detail with the
> new resultset/rowset. Saves would follow the same
> idea.
>
> Of course, anybody could write their own
> implementation of MasterDetailBinding allowing all
> kinds of interesting loading/saving scenarios. The
> key is that DataModels don't need to know anything
> about their environment, the Bindings take care of
> all of that.
>

I think you're off to a good start -- pulling the I/O out of the DataModels is probably the right way to go. We'll need to work out creation and deletion as carefully as you've defined retrieval, though.

> >2) can you commit changes? (master has to coordinate
> commitment of changes with itself and any other
> details that exist), with an implied boolean
> response
>
> The binding framework currently has an isValid
> method, is this sufficient?

Generally, but we have to avoid the trap of assuming that just because isValid returned true, commit will succeed. There are any number of reasons why commit can fail.

In the case of MasterDetail, though, who's asking the question? Is it a MasterDetailBinding?

>
> >3) commit your changes, with an implied
> success|failure response
>
> By commit do you mean commit to the DataModel or the
> database/data store? Binding has a push() method that
> would commit the data to the DataModel, and the
> proposed save() method would be the same as a
> commit() method. Or are you thinking of something
> else?
>

For my part, I'm interested in the client/server relationship: having been part of an effort to create a commercial product similar to JDNC, I want to point out the traps that we fell into.

We had forms, tables, lists, trees, and queries working in a manner similar to JDNC (obviously with some architectural details different, but similar enough). One trap we fell into was that each of our UI pieces knew how to load and save data on its own. When it came time to do MasterDetail, we found that we couldn't reuse any of them unless we rewrote a lot of the C/S overhead -- knowing how to load/save data as part of a larger transaction turned out to be enough of a different beast to be tricky.

> > 4) rollback your changes (might be reload from
> cached values, might be reload from store) --
> alternatively, reset to your last known good state.
>
> Good idea.
>

Also, we'll really need to route the commit success|failure information around: often, the data store will return more information to us as the result of an add or a change (database assigned ID#'s, record timestamps, computed fields, that sort of thing).
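One way to carry that back is to have commit return a result object instead of a bare boolean, so server-assigned values can be folded back into the DataModel afterwards -- sketch, invented names:

    import java.util.Collections;
    import java.util.Map;

    // what a data store hands back after an add/update
    public class CommitResult {
        private final boolean success;
        // field name -> value assigned or recomputed by the store
        // (generated IDs, record timestamps, computed fields, ...)
        private final Map serverAssignedValues;

        public CommitResult(boolean success, Map serverAssignedValues) {
            this.success = success;
            this.serverAssignedValues = Collections.unmodifiableMap(serverAssignedValues);
        }

        public boolean isSuccess() { return success; }
        public Map getServerAssignedValues() { return serverAssignedValues; }
    }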

> >another important point is that it is not safe for
> any particular UI piece to assume that it can only be
> used in the master role -- it should be perfectly
> valid for a 1:1 relationship to be embodied by a
> master:detail relationship where both the master and
> the detail are JXForms. This is implied by the idea
> that has come up earlier that from the master's POV,
> the detail is a DataModel.
>
> I think we're ok here. If you have a DataModel ->
> MasterDetailBinding -> DataModel structure, you can
> support 1:1 relationships, though not restricted to
> it. Similarly, it should naturally follow that
> DataModel -> MasterDetailBinding -> DataModel ->
> MasterDetailBinding -> DataModel would be possible,
> allowing 1:M:M or any other conceivable kind of
> cardinality.
>
> Rich

Your last case is one of the difficult ones: to get DM -> Binding -> DM -> Binding -> DM to work, the Bindings have to be aware of both master and detail roles. The second binding in your list may not be able to assume control over I/O in the same way that the first can.

Dave

dhall
Offline
Joined: 2006-02-17
Points: 0

Amy

I believe your example really isn't a master detail in the way that most folks understand it. What you're showing is a list of flat records where selecting a single record shows the details of that record in an associated form.

The way that I'm familiar with the term 'master-detail' (or 'header-detail') from a long career in business application development is really the other way around: the record in question is not a flat record, but a record with an embedded 1:many relationship. There are variations related to the life-cycle of the different parts. For example, if we want to use a master-detail to represent an invoice for an order, then both the 'master' part (the order date, billing address, shipping address, total, tax, etc) and the 'detail' part (for each item ordered, the quantity, part-number, unit price, etc) must all be created/modified/deleted as part of a transaction, and (in general) the master part must exist in the datastore prior to any of the details being created (typically, each detail record contains the full primary key of the master as a part of its primary key).

Alternatively, for UI purposes, a monthly bill is also a form of a 'master-detail', but with different semantics related to the life-cycle of the different parts. In this case, the 'details' (all of the orders for a given billing period) come first, and are not in fact owned by the master, or even aware of its existence. The 'master' part will summarize the details, and may contain information linking the combined structure to the customer's payment status.

A third form is when both the master and the details exist prior to the relationship and persist beyond the termination of the relationship -- for example, the association between employees and departments, or (in a case I worked on several years ago) a sports team and its players. Both the master (department/team) and the 'details' (employee/player) are permanent entities in the data store (in this case, the actual detail record is not the employee/player record, but a record that contains information about the association between the employee/player and the department/team for some time period).

I think it's important to rename what you're calling 'master-detail' to align more closely with general understanding. At a defunct company that was attempting to create a product similar to JDNC, we called the structure that you call 'master-detail' a 'list editor'. The purpose of the structure was to maintain a list of records: you could add new records, and modify or delete existing records. The important point is that the individual records had no specific relationship amongst themselves, other than the fact that they all shared a similar structure. In this case, the records maintained by the list editor can be simple structures (as in the current JDNC example) or more complex 1:m structures.

Dave Hall
http://jga.sf.net

Amy Fowler

Hi Dave,

Your point is well taken. The various interpretations of "master-detail"
(including my admittedly over-simplified one) remind me a little of the
sometimes-overly broad use of the "mvc" term :-)

A google on the term turns up the following pretty quickly:

"A one-to-many relationship, often referred to as a "master-detail"
or "parent-child" relationship, is the most usual relationship between
two tables in a database."

So I will indeed rename the demo appropriately so as not to mislead
those of you who deal in databases every day.

But JDNC should facilitate the construction of a true master-detail as well
and I hope you and other DB/app experts can help us solve that problem.

Aim

---------------------------------------------------------------------
To unsubscribe, e-mail: jdnc-unsubscribe@jdnc.dev.java.net
For additional commands, e-mail: jdnc-help@jdnc.dev.java.net

shahbaz
Offline
Joined: 2005-01-16
Points: 0

I've been looking at the RowSet interfaces in JDK 1.5. I thought JoinRowSet (or Joinable) might be what we are looking for (similar to DataRelation in .NET). As far as I can tell, there are a few problems with it.

- setMatchColumn() is part of the Joinable interface (in other words, there is one setMatchColumn() per rowset). What if we have one data model serving as the source data for several 'master/detail' tables... all with different match columns? For example, if we make 'customers' the master table and 'orders' the detail table, then we could set customerID as the match column, and it would allow us to get the orders for a specific customer every time we click on a customer record. But what if we had another master/detail table where clicking on a customer record needed to bring up the sales agents for that region... here regionID might be the match column instead of customerID.

- It is also not clear from the documentation how one might deal with a one-to-many relationship instead of simply merging two (or several) rowsets into a single rowset. In other words, shouldn't there be a mechanism which takes advantage of the defined relationship and returns the subset of the rowset which matches a specific matchColumn (perhaps provided as a parameter)? Again, imagine a function "RowSet getRowSet(MatchColumn column)".

- According to the API documentation, it doesn't look like RowSet can be used directly as a JTable TableModel anyway (hopefully I just missed something here... RowSet and the List interface not being pluggable directly into JTable seems like a huge oversight... in the presence of .NET databinding, such an oversight is starting to seem like contempt for developers)

Anyway, hopefully others have good ideas about this.
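For what it's worth, the closest thing in the 1.5 rowset package to "give me the subset matching this key" seems to be FilteredRowSet plus a Predicate. Something like the sketch below could narrow a detail rowset to the currently selected master key (the class is mine, not part of the JDK, and the FilteredRowSet instance itself still has to come from a vendor implementation):

    import java.sql.SQLException;
    import javax.sql.RowSet;
    import javax.sql.rowset.Predicate;

    // keeps only the rows whose match column equals the selected master key
    public class MatchColumnPredicate implements Predicate {
        private final String columnName;
        private final Object keyValue;

        public MatchColumnPredicate(String columnName, Object keyValue) {
            this.columnName = columnName;
            this.keyValue = keyValue;
        }

        public boolean evaluate(RowSet rs) {
            try {
                return keyValue.equals(rs.getObject(columnName));
            } catch (SQLException e) {
                return false;
            }
        }

        public boolean evaluate(Object value, int column) {
            return true; // we only constrain by column name
        }

        public boolean evaluate(Object value, String column) {
            return !columnName.equalsIgnoreCase(column) || keyValue.equals(value);
        }
    }

    // usage: detailRowSet.setFilter(new MatchColumnPredicate("customerID", selectedCustomerId));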

rbair
Offline
Joined: 2003-07-08
Points: 0

My understanding is that Joinable row sets are for merging two or more rowsets into one (just like a join in sql). So, no, I don't think it was designed for master detail at all.

However, don't despair! We shouldn't be thinking about joining row sets into master/detail relationships anyway. We should be thinking about joining DataModels in master/detail relationships. A RowSet should be wrapped by a RowSetDataModel, which will then be arranged in master/detail relationships. Exactly how that is going to happen is still a matter of debate (and actually, quite a complex issue).

naff
Offline
Joined: 2003-06-13
Points: 0

> jdnc-interest@javadesktop.org wrote:
>
> > Rather, the table should be bound to a data model.
> The table notifies the underlying data model whenever
> its selected row changes. The DataModel then notifies
> the detail data models that they need to be reloaded,
> and they in turn notify their bindings which
> repopulate the child tables.
>
> This is pretty much how JDNC handles it today,
> although, as many of you
> have pointed out, there are some definite issues
> (lazy fetch of detail,
> fields as object graphics, etc).
>
> The demo we showed at JavaOne was a master-detail. I
> promised weeks ago
> I'd send out the source code for this demo, and I'm
> wayyy late on that
> promise. I suppose I could skip making it pretty and
> just send it out
> in its demo-warted form....

We're waiting ... ;-)

netsql
Offline
Joined: 2004-03-07
Points: 0

I just want to add my -1 vote, I too think DataSource is a bad idea.
What does DataSource have to do with the client or the application? It's a DAO- or JDO-side class.
Especially having a data model know about the data source.

On the web, select is optimistic. The same is true of RIA; it should be optimistic. You don't do select for update.

Maybe start with a web application that you want to convert to JDNC. For example, Apache's jPetStore (it has Struts and a DAO, from iBatis.com).

I did JDNC master/detail just fine with just Collections (ArrayList/Map -- which is how associative arrays in Flex work), and the development site is live.

.V

Nicola Ken Barozzi

jdnc-interest@javadesktop.org wrote:
> I just want to add my -1 vote, I too think DataSource is a bad idea.
> What does DataSource have to do with client or application, it's a DAO or JDO side class.

Not necessarily. Sometimes it's more useful to access a DB in a generic
way, without even having to write DAO stuff.

Not all the latest stuff is the best for every job ;-)

> Especialy, having a data model know about the data source.

This is another problem, and I tend to agree, but maybe for different
reasons.

I'm looking in the code, and I think I see too much connection between
the two. In the code, it seems that the DataSource concept has been
migrated a lot into the DataModel.

For example (rbair's packages), in RowSetDataModel, this does not make
sense to me:

public void setRowSet(CachedRowSet data) {

Isn't it the RowSet that is able to give the data?

Also:

public Object getValue(String fieldName) {
    try {
        return rs == null ? null : rs.getObject(fieldName);
                                   ^^^^^^^^^^^^^^^^^^^^^^^^

Here it's clear that the DataSource gives the *RowSet* and that
DataModel works on it... not what comes to my mind reading the name
"Source".

I still have to study the code some more, but it seems that the worst
thing about the DataSource is probably the name ;-)

Maybe calling it ConnectionHandler could be more descriptive...

--
Nicola Ken Barozzi nicolaken@apache.org
- verba volant, scripta manent -
(discussions get forgotten, just code remains)
---------------------------------------------------------------------

---------------------------------------------------------------------
To unsubscribe, e-mail: jdnc-unsubscribe@jdnc.dev.java.net
For additional commands, e-mail: jdnc-help@jdnc.dev.java.net

netsql
Offline
Joined: 2004-03-07
Points: 0

3-tier architecture works well, and having a DAO lets you choose whether you will use RowSet or JDO or something else.
btw, the DAO does not execute in WebStart ;-); it's a remote DAO.

If JDNC is a view technology, like Flex/Lazlo, then its model should know nothing about the DAO.
For example, Struts form beans have nothing in their API or implementation about the DAO.

http://www.sandrasf.com/other/sandra/javadoc/org/sandra/api/ModelApi.html
This interface for a model is common for Table, Form, etc models.
It has an abstract populate().
How it gets implemented by users does not matter, as long as it's a model in the end and can be bound to a view.

(The other thing sandra does is define a service API through which any of the many services can be leveraged.)

So RowSet and DataSource should not be part of the Model, if there are plans to make BluePrints with this.

Else, sandraSF will be an MVC version that just ignores the 2 tier parts of JDNC.

See this for more:
http://www.macromedia.com/devnet/flex/articles/struts_04.html

hth,
.V

Nicola Ken Barozzi

jdnc-interest@javadesktop.org wrote:
...
> If JDNC is a view technology, like Flex/Lazlo, then it should know nothing in the model about DAO.
> For example Struts formbeans have nothing in API or implementation about the DAO.

AFAIU JDNC is not just a view technology, but I may be wrong.

> http://www.sandrasf.com/other/sandra/javadoc/org/sandra/api/ModelApi.html
> This interface for a model is common for Table, Form, etc models.

DataModel is in common too.

> It has an abstract populate().
> How it gets implemented by users does not mater, as long as it's a model in the end and can be bound to a view.

Actually, we are talking about these implementations. The concrete
DataModel classes and the DataSources are implementations of the DataModel.

...
> So RowSet and DataSource should not be part of the Model, if there is plans to make BluePrints w/ this.

How would you change the signature of the model Api classes?
How would you see a generic coder use the API?

Show me the code, I'm confused... :-)

--
Nicola Ken Barozzi nicolaken@apache.org
- verba volant, scripta manent -
(discussions get forgotten, just code remains)
---------------------------------------------------------------------

---------------------------------------------------------------------
To unsubscribe, e-mail: jdnc-unsubscribe@jdnc.dev.java.net
For additional commands, e-mail: jdnc-help@jdnc.dev.java.net

netsql
Offline
Joined: 2004-03-07
Points: 0

DefaultTableModelExt and DefaultTableModelExt, etc. have no interface in common.

I would create model interfaces as per the link:

"How would you change the signature of the model Api classes?"
http://www.sandrasf.com/other/sandra/javadoc/org/sandra/api/ModelApi.html

"How would I implement it?"
I have a live, clean MVC sample running on boardVU.com with master/detail. Run that site to see it. I will send you the code right away.

What I learned from Struts is that it does not matter how the users implement it. Just have MyModelImpl.populate() as abstract.

In the populate, users would call their service.
(and MyModelImpl.populate() must be called from C=Action/SwingWorker/Listener.)

.V

Nicola Ken Barozzi

jdnc-interest@javadesktop.org wrote:
...
> What I learned from Struts is that it does not matter how the users implement it. Just have MyModelImpl.populate() as abstract.
>
> In the populate, users would call their service.
> (and MyModelImpl.populate() must be called from C=Action/SwingWorker/Listener.)

Ok, here is a problem with rbair's current DataModel-DataSource
combo that I see after thinking about your posts:

DataModels are not interchangeable

I mean, if I want to replace a JavaBeanDataModel with a RowSetDataModel,
I can't do it in a simple way, because they need different method calls
after instantiation.

RowSetDataModel has these _extra_ methods apart from being a DataModel:

public void setRowSet(CachedRowSet data)
public CachedRowSet getRowSet()
public void setMetaData(ResultSetMetaData rsmd)

public String getSelectSql()
public void setSelectSql(String selectSql)

public List getSelectParams()
public void setSelectParams(List selectParams)

public String getTableName()
public void setTableName(String tableName)

These are *not* in the basic DataModel interface, and they tightly couple my
DB-access code to the DataModel. It also exposes as public methods that are
usually meant to be used only by the DataSource.

Here is where the 3-tier stuff you talk about is important. With it, the
developer can refactor all the data-getting code outside and keep the
DataModel creation clean.

As you see, instead of taking your code, I'm trying to refactor the
existing rbair code as little as possible, so as to minimize the impact on
the existing system.

Ok, more thinking :-)

--
Nicola Ken Barozzi nicolaken@apache.org
- verba volant, scripta manent -
(discussions get forgotten, just code remains)
---------------------------------------------------------------------

---------------------------------------------------------------------
To unsubscribe, e-mail: jdnc-unsubscribe@jdnc.dev.java.net
For additional commands, e-mail: jdnc-help@jdnc.dev.java.net

rbair
Offline
Joined: 2003-07-08
Points: 0

> Ok, here is a problem with the current rbair's
> DataModel-DataSource
> combo that I see after thinking about your posts:
>
> DataModels are not interchangeable
>
> I mean, if I want to replace a JavaBeanDataModel with
> a RowSetDataModel,
> I can't do it in a simple way, because they need
> different method calls
> after instantiation.
>
> RowSetDataModel has these _extra_ methods apart being
> a DataModel:
>
> public void setRowSet(CachedRowSet data)
> public CachedRowSet getRowSet()
> public void setMetaData(ResultSetMetaData rsmd)
>
> public String getSelectSql()
> public void setSelectSql(String selectSql)
>
> public List getSelectParams()
> public void setSelectParams(List selectParams)
>
> public String getTableName()
> public void setTableName(String tableName)
>
> These are *not* in the basic DataModel interface, and
> tightly couples my
> DB-access code with the DataModel. It also exposes
> methods that are
> usually to be used only by the DataSource as public.

You nailed it. This is one of the things that has been bugging me about the way I have DataSources and DataModels arranged. The other thing that I don't like is that you _have to know_ what kind of DataSource you have so that you can use a specific DataModel for it. For instance, a JavaBeanDataModel doesn't mix with a RowSetDataSource. There isn't any API to enforce this, you just have to know. Blah, there has to be a better way!

Nicola, last night I read your comment about DataSource being named wrong, and it got me thinking about a factory approach that had crossed my mind last month during a conversation with Jeanette. But first, I want to list some of the design criteria that fostered the current API and that needs to be supported by any other API.

1) Multiple DataModels sharing one connection to the Data Store.

2) Hiding multithreaded access to the Data Store

3) Some object needs to know how to get data from the Data Store (via an SQL query, or JDO query, or HTTP URL, or something).

In the rbair-incubator api, I have the DataSource implementing #1 & #2, and the DataModel implementing #3. This is what leads to the tight coupling that Bill and Jeanette are concerned about, and the "DataModel lock-in" that you mentioned here. However, looking over the current API I don't see any way around it. It doesn't make sense to move #3 over to the DataSource, because then a single DataSource cannot be used by multiple DataModels. Unless, of course, there was some kind of mapping in the DataSource specifying which query went with which DataModel.

No matter how I try to refactor it, I always end up with tight coupling between two objects just like the incubator DataSource and DataModel. The basic problem being #3. However this morning I thought of a slightly different approach (well, you might think it is radically different, but it was sort of evolutionary :)).

Let's say that we have an object called DataStoreConnection which will handle jobs #1, #2 and #3. This DataStoreConnection is essentially the DataSource from the incubator. Suppose that this DSC contains a list of Queries. These can be SQL queries, or JDO queries, or filesystem queries -- depending entirely on the type of DSC. These queries are mapped by name. So I might have a "GetAllCustomers" query. A DatabaseConnection would generate SQL query objects, and an HTTPConnection would generate HTTP get request query objects, for instance.

Now, let's have the DataModel contain a String for "queryName", and also a reference to the DataStoreConnection. Because queries can be parameterized, we'll also need to have "setParam", "clearParam", and "getParam" methods.

Also, let's reintroduce the DataSource as a real DataSource. It is created by the DataStoreConnection, and wraps a RowSet, File, DOM tree, Collection, etc., depending on the DSC.
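Sketched out, with every name provisional:

    import java.util.HashMap;
    import java.util.Map;

    // owns the connection to the data store (#1), hides the threading (#2), and
    // holds the named queries that know how to get data (#3)
    public interface DataStoreConnection {
        void addQuery(String name, Query query); // e.g. "GetAllCustomers"
        // runs the named query and returns a DataSource wrapping a RowSet, File,
        // DOM tree, Collection, etc., depending on the connection type
        DataSource execute(String queryName, Map params);
    }

    interface Query { /* SQL text, a JDO query, an HTTP GET, ... */ }

    interface DataSource { /* created by the connection; wraps the raw data */ }

    // the single DataModel implementation: it no longer knows what kind of data
    // it encapsulates, only which named query populates it
    public class GenericDataModel {
        private DataStoreConnection connection;
        private String queryName;
        private final Map params = new HashMap();

        public void setConnection(DataStoreConnection connection) { this.connection = connection; }
        public void setQueryName(String queryName) { this.queryName = queryName; }
        public void setParam(String name, Object value) { params.put(name, value); }
        public void clearParam(String name) { params.remove(name); }
        public Object getParam(String name) { return params.get(name); }

        public void refresh() {
            DataSource data = connection.execute(queryName, params);
            // ...expose the data's fields and fire change notifications here
        }
    }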

Master/Detail is all still managed in the same manner as it currently is (for the most part, but I'm still working this part over in my mind).

The net result is:

1) The DataStoreConnection is the only class that knows about the Data Store in any way

2) A single query can be reused by multiple DataModels

3) We only need a single DataModel implementation because it no longer knows what kind of data it encapsulates

4) DataSource and Query implementations are hidden by the DataStoreConnection -- the developer doesn't care about the actual implementation

5) DataStoreConnections can be replaced without changing any DataModel/Binding code -- except for the caveat that the new DataStoreConnection must have queries with the same names as the old DataStoreConnection or some DataModels will not be populated with data

It's also important to realize that the role the DataSource would be playing here is an implementation detail, and not something that the average developer would be concerned with. The average developer would create a DataStoreConnection, a couple of queries, and then a couple of DataModels.

Hmmmm..... what do you think?

Richard