For example, suppose I am creating a 3-layer application (data / business / UI) and the data layer is grabbing single or multiple records. Do I convert everything from the data layer into generic lists/collections before sending it to the business layer? Is it OK to send DataTables? What about sending info back to the data layer?
If I use objects/lists, are these members of the Data or Business layers? Can I use the same objects to pass to and from the layers?
Here is some pseudo code:
object user with email / password
In the UI layer, the user inputs an email / password. The UI layer does validation and then, I assume, creates a new user object to pass to the business layer, which does further validation and passes the same object to the data layer to insert the record. Is this correct?
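In C# terms, I picture something like this (all class and method names here are just placeholders I made up):

// Hypothetical sketch (all names invented) of the flow I have in mind.
// Requires: using System;
public class User
{
    public string Email { get; set; }
    public string Password { get; set; }
}

public class UserLogic                    // business layer
{
    public void Register(User user)
    {
        if (string.IsNullOrEmpty(user.Email))
            throw new ArgumentException("Email is required");   // further validation
        new UserData().Insert(user);      // hand the same object to the data layer
    }
}

public class UserData                     // data layer
{
    public void Insert(User user) { /* parameterized INSERT goes here */ }
}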
I am new to .NET (I come from 8+ years of an ASP VBScript background) and am trying to get up to speed on the 'right' way to do things.
I am updating this answer because comments left by Developr seem to indicate that he would like a bit more detail.
The short answer to your question is Yes: you'll want to use class instances (objects) to mediate the interface between your UI and your Business Logic Layer. The BLL and DAL will communicate as discussed below. You should not be passing DataTables or SqlDataReaders around.
The simple reasons why: objects are type-safe, offer Intellisense support, permit you to make additions or alterations at the Business Layer that aren't necessarily found in the database, and give you some freedom to decouple the application from the database so that you can maintain a consistent BLL interface even as the database changes (within limits, of course). It is simply good programming practice.
The big picture is that, for any page in your UI, you'll have one or more "models" that you want to display and interact with. Objects are the way to capture the current state of a model. In terms of process: the UI will request a model (which may be a single object or a list of objects) from the Business Logic Layer (BLL). The BLL then creates and returns this model - usually using the tools from the Data Access Layer (DAL). If changes are made to the model in the UI, then the UI will send the revised object(s) back to the BLL with instructions as to what to do with them (e.g. insert, update, delete).
.NET is great for this kind of Separation of Concerns because the generic container classes - and in particular the List<> class - are perfect for this kind of work. They not only permit you to pass the data around, but they are easily integrated with sophisticated UI controls like grids, lists, etc. via the ObjectDataSource class. You can implement the full range of operations that you need to develop the UI using ObjectDataSource: "Fill" operations with parameters, CRUD operations, sorting, and so on.
Because this is fairly important, let me make a quick diversion to demonstrate how to define an ObjectDataSource:
<asp:ObjectDataSource ID="ObjectDataSource1" runat="server"
    OldValuesParameterFormatString="original_{0}"
    SelectMethod="GetArticles"
    OnObjectCreating="OnObjectCreating"
    TypeName="MotivationBusinessModel.ContentPagesLogic">
    <SelectParameters>
        <asp:SessionParameter DefaultValue="News" Name="category"
            SessionField="CurPageCategory" Type="String" />
    </SelectParameters>
</asp:ObjectDataSource>
Here, MotivationBusinessModel is the namespace for the BLL and ContentPagesLogic is the class implementing the logic for, well, content pages. The method for pulling data is "GetArticles" and it takes a parameter named category, filled from the CurPageCategory session field. In this particular case, the ObjectDataSource returns a list of objects that is then used by a grid. Note that I need to pass session state information to the BLL class, so in the code-behind I have a method "OnObjectCreating" that lets me create the object and pass in parameters:
public void OnObjectCreating(object sender, ObjectDataSourceEventArgs e)
{
    e.ObjectInstance = new ContentPagesLogic(sessionObj);
}
So, this is how it works. But that raises one very big question - where do the Models / Business Objects come from? ORMs like Linq to SQL and SubSonic offer code generators that let you create a class for each of your database tables. That is, these tools say that the model classes should be defined in your DAL and map directly onto database tables. Linq to Entities lets you define your objects in a manner quite distinct from the layout of your database, but it is correspondingly more complex (that is why there is a distinction between Linq to SQL and Linq to Entities). In essence, it is a BLL solution. Joel and I have said in various places on this thread that, really, the Business Layer is generally where the Models should be defined (although I use a mix of BLL and DAL objects in reality).
Once you decide to do this, how do you implement the mapping from models to the database? Well, you write classes in the BLL to pull the data (using your DAL) and fill the object or list of objects. It is Business Logic because the mapping is often accompanied by additional logic to flesh out the Model (e.g. defining the value of derived fields).
Joel creates static Factory classes to implement the model-to-database mapping. This is a good approach as it uses a well-known pattern and places the mapping right in the construction of the object(s) to be returned. You always know where to go to see the mapping and the overall approach is simple and straightforward.
I've taken a different approach. Throughout my BLL, I define Logic classes and Model classes. These are generally defined in matching pairs, where both classes live in the same file and their names differ only by suffix (e.g. ClassModel and ClassLogic). The Logic classes know how to work with the Model classes - doing things like Fill, Save ("Upsert"), Delete, and generating feedback for a Model instance.
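As a quick illustration, a matching pair might look like this (the class names here are invented for the example, not taken from my actual code):

// Hypothetical Model/Logic pair; the Fill plumbing is sketched further below.
// Requires: using System.Collections.Generic;
public class ArticleModel
{
    public int UID { get; set; }
    public string Title { get; set; }
    public string Body { get; set; }
    public string Summary { get; set; }   // derived field, not stored in the database
}

public class ArticleLogic
{
    // Fill would normally delegate to the DAL (see ReturnList<T> below); after
    // the mapping step, business logic computes any derived fields.
    public List<ArticleModel> Fill(List<ArticleModel> fromDal)
    {
        foreach (ArticleModel article in fromDal)
        {
            string body = article.Body ?? "";
            article.Summary = body.Length <= 100 ? body : body.Substring(0, 100) + "...";
        }
        return fromDal;
    }
}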
In particular, to do the Fill, I leverage methods found in my primary DAL class (shown below) that let me take any class and any SQL query and find a way to create/fill instances of the class using the data returned by the query (either as a single instance or as a list). That is, the Logic class just grabs a Model class definition, defines a SQL Query and sends them to the DAL. The result is a single object or list of objects that I can then pass on to the UI. Note that the query may return fields from one table or multiple tables joined together. At the mapping level, I really don't care - I just want some objects filled.
Here is the first function. It will take an arbitrary class and map it automatically to all matching fields extracted from a query. The matching is performed by finding fields whose name matches a property in the class. If there are extra class fields (e.g. ones that you'll fill using business logic) or extra query fields, they are ignored.
// Requires: using System; using System.Collections.Generic;
//           using System.Data.SqlClient; using System.Reflection;
public List<T> ReturnList<T>() where T : new()
{
    try
    {
        List<T> fdList = new List<T>();
        myCommand.CommandText = QueryString;
        Type objectType = typeof(T);
        PropertyInfo[] typeFields = objectType.GetProperties();
        // ExecuteReader never returns null; the using block guarantees the
        // reader is closed even if a row fails to map.
        using (SqlDataReader nwReader = myCommand.ExecuteReader())
        {
            while (nwReader.Read())
            {
                T obj = new T();
                for (int i = 0; i < nwReader.FieldCount; i++)
                {
                    foreach (PropertyInfo info in typeFields)
                    {
                        // Because the class may have properties that are *not* being
                        // filled by the query, match by name rather than indexing
                        // with nwReader[info.Name].
                        if (info.Name == nwReader.GetName(i))
                        {
                            if (!nwReader[i].Equals(DBNull.Value))
                                info.SetValue(obj, nwReader[i], null);
                            break;
                        }
                    }
                }
                fdList.Add(obj);
            }
        }
        return fdList;
    }
    catch
    {
        conn.Close();
        throw;
    }
}
This is used in the context of my DAL, but the only things you have to have in the DAL class are a holder for the QueryString, a SqlCommand object with an open connection, and any parameters. The key is just to make sure the ExecuteReader call will work when this is invoked. A typical use of this function by my BLL thus looks like:
return qry.Command("Select AttendDate, Count(*) as ClassAttendCount From ClassAttend")
          .Where("ClassID", classID)
          .ReturnList<AttendListDateModel>();
You can also implement support for anonymous types like so:
// Requires: using System; using System.Collections.Generic;
//           using System.ComponentModel; using System.Data.SqlClient;
public List<T> ReturnList<T>(T sample)
{
    try
    {
        List<T> fdList = new List<T>();
        myCommand.CommandText = QueryString;
        // An anonymous type exposes its properties (in declaration order)
        // through a compiler-generated constructor.
        PropertyDescriptorCollection properties = TypeDescriptor.GetProperties(sample);
        using (SqlDataReader nwReader = myCommand.ExecuteReader())
        {
            while (nwReader.Read())
            {
                int objIdx = 0;
                object[] objArray = new object[properties.Count];
                // Fill the constructor arguments in *property* order so they line
                // up correctly even if the query's column order differs. This
                // assumes the query returns a column for every property.
                foreach (PropertyDescriptor info in properties)
                {
                    object value = nwReader[info.Name];
                    objArray[objIdx++] = value == DBNull.Value ? null : value;
                }
                fdList.Add((T)Activator.CreateInstance(sample.GetType(), objArray));
            }
        }
        return fdList;
    }
    catch
    {
        conn.Close();
        throw;
    }
}
A call to this looks like:
var qList = qry.Command("Select QueryDesc, UID, StaffID From Query")
               .Where("SiteID", sessionObj.siteID)
               .ReturnList(new { QueryDesc = "", UID = 0, StaffID = 0 });
Now qList is a generic list of dynamically-created class instances defined on the fly.
Let's say you have a function in your BLL that takes a pull-down list as an argument and fills it with data. Here is how you could fill the pull-down with the results retrieved above:
foreach (var queryObj in qList)
{
    pullDownList.Add(new ListItem(queryObj.QueryDesc, queryObj.UID.ToString()));
}
In short, we can define anonymous Business Model classes on the fly and then fill them just by passing some (on the fly) SQL to the DAL. Thus, the BLL is very easy to update in response to evolving needs in the UI.
One last note: If you are concerned that defining and passing around objects wastes memory, you shouldn't be: if you use a SqlDataReader to pull the data and place it into the objects that make up your list, you'll only have one in-memory copy (the list) as the reader iterates through in a read-only, forward-only fashion. Of course, if you use DataAdapter and Table classes (etc.) at your data access layer then you would be incurring needless overhead (which is why you shouldn't do it).
In general, I think it is better to send objects rather than data tables. With objects, each layer knows what it is receiving (which objects with which properties, etc.). You get compile-time safety with objects - you can't accidentally misspell a property name - and it forces an inherent contract between the two tiers.
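A quick illustration (hypothetical names; the misspelling is deliberate):

// Illustrative sketch only: the same typo fails at runtime with a DataTable
// but at compile time with a typed object.
using System;
using System.Data;

class User { public string Email { get; set; } }

static class TypoDemo
{
    static void Main()
    {
        var table = new DataTable();
        table.Columns.Add("Email", typeof(string));
        table.Rows.Add("a@b.com");

        try
        {
            // A misspelled column name compiles fine...
            string fromTable = (string)table.Rows[0]["Emial"];
            Console.WriteLine(fromTable);
        }
        catch (ArgumentException ex)
        {
            Console.WriteLine("Runtime failure: " + ex.Message); // ...and blows up here
        }

        var user = new User { Email = "a@b.com" };
        // string fromObject = user.Emial;  // would not even compile: no such property
        Console.WriteLine(user.Email);      // the compiler and Intellisense keep this honest
    }
}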
Joshua also brings up a good point: by using your custom objects, you are also decoupling the other tiers from the data tier. You can always populate your custom objects from another data source and the other tiers will be none the wiser. With a SQL data table, this will probably not be so easy.
Joel also made a good point. Having your data layer aware of your business objects is not a good idea for the same reason as making your business and UI layers aware of the specifics of your data layer.
There are nearly as many "correct" ways to do this as there are programming teams in the world. That said, what I like to do is build a factory for each of my business objects that looks something like this:
public static class SomeBusinessObjectFactory
{
    public static SomeBusinessObject FromDataRow(IDataRecord row)
    {
        return new SomeBusinessObject()
        {
            Property1 = (string)row["Property1"],   // casts depend on the actual column types
            Property2 = (int)row["Property2"]
            // ... one assignment per property
        };
    }
}
I also have a generic translation method that I use to call these factories:
public static IEnumerable<T> TranslateQuery<T>(IEnumerable<IDataRecord> source, Func<IDataRecord, T> factory)
{
    foreach (IDataRecord item in source)
        yield return factory(item);
}
Depending on what your team prefers, the size of the project, etc, these factory objects and translator can live with the business layer or data layer, or even an extra "translation" assembly/layer.
Then my data layer will have code that looks like this:
// Requires: using System.Collections.Generic; using System.Data;
//           using System.Data.SqlClient;
private SqlConnection GetConnection()
{
    var conn = new SqlConnection( /* connection string loaded from config file */ );
    conn.Open();
    return conn;
}

// A plain static helper (an extension method can't live in a class with
// instance members like GetConnection and SomeQuery).
private static IEnumerable<IDataRecord> ExecuteEnumerable(SqlCommand command)
{
    using (var rdr = command.ExecuteReader())
    {
        while (rdr.Read())
        {
            yield return rdr;
        }
    }
}

public IEnumerable<IDataRecord> SomeQuery(int someParameter)
{
    string sql = " .... ";
    using (var cn = GetConnection())
    using (var cmd = new SqlCommand(sql, cn))
    {
        cmd.Parameters.Add("@SomeParameter", SqlDbType.Int).Value = someParameter;
        // Yield from inside the using blocks so the connection stays open until
        // the caller finishes enumerating (iterators execute lazily; returning
        // the enumerable directly would dispose the connection too early).
        foreach (IDataRecord record in ExecuteEnumerable(cmd))
            yield return record;
    }
}
And then I can put it all together like this:
SomeGridControl.DataSource = TranslateQuery(SomeQuery(5), SomeBusinessObjectFactory.FromDataRow);
I'd add a new layer - ORM (Object-Relational Mapping) - with the responsibility of transforming data from the data layer into business object collections. I think that using objects in your business model is the best practice.
Whatever means you use to pass data between the layers of your application, just be sure that the implementation details of each layer do not leak into the others. You should be able to change how the data in the relational database is stored without modifying any of the code in the business object layers (other than serialization, of course).
A tight coupling between the design of the business objects and the relational data model is extremely irritating and is a waste of a good RDBMS.
There are a lot of great answers here; I would just add that before you spend a lot of time creating translation layers and factories, it's important to understand the purpose and future of your application.
Somewhere - whether it's a config mapping file, a factory, or directly in your data/business/UI layer - some object/file/class is going to have to know what transpires between each layer. If swapping out layers is realistic, then creating the translation layers is useful. Other times, it just makes sense to have some layer (I usually make it the business layer) know about all the interfaces (or at least enough to broker between data and UI).
Again, this isn't to say all of that stuff is bad, just that it's possibly a case of YAGNI. Some DI and ORM frameworks make this stuff so easy that it would be silly not to use them. If you're using one, then it probably makes sense to use it for all it's worth.
I strongly suggest that you do it with objects. One approach is to make only your interfaces public while keeping your implementations internal, expose your objects through factories, and couple your factories with a façade so that you have a single, unique entry point into your library. Then only data objects pass through your façade, so you always know what to expect both inside and outside of it.
This way, any UI could call your library's façade, and the only thing left to code is the UI itself.
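A compact sketch of that arrangement (all names here are hypothetical) might look like:

// Hypothetical sketch: public interfaces + internal implementations,
// exposed through a factory behind a single façade entry point.
public interface IUserService
{
    UserDto GetUser(int id);
}

public class UserDto   // plain data object; the only kind of type that crosses the façade
{
    public int Id { get; set; }
    public string Email { get; set; }
}

internal class UserService : IUserService   // implementation stays internal
{
    public UserDto GetUser(int id)
    {
        return new UserDto { Id = id, Email = "user@example.com" };   // stubbed for the sketch
    }
}

public static class LibraryFacade   // the single, unique entry point to the library
{
    public static IUserService CreateUserService()
    {
        return new UserService();
    }
}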
Here's a link which I find personally very interesting that explains in summary the different design patterns: GoF .NET Design Patterns for C# and VBNET.
If you'd rather a code sample illustrating what I'm stating, please feel free to ask.
The application that I'm working on now is fairly old (in .NET terms) and uses strongly typed DataSets to pass data between the data layer and the business layer. In the business layer, the data in the DataSets is manually "OR-mapped" to business objects before being passed to the front end.
This is probably not a popular design decision, though, because strongly typed DataSets have always been somewhat controversial.
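The manual mapping step looks roughly like this (a sketch; the typed DataSet and row names are invented):

// Sketch of the manual "OR-mapping" step (the OrdersDataSet/CustomerRow names
// are hypothetical; typed DataSets generate row classes and IsXxxNull() helpers).
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
}

public static class CustomerMapper
{
    public static Customer ToCustomer(OrdersDataSet.CustomerRow row)
    {
        return new Customer
        {
            Id = row.CustomerId,                          // typed columns: no casts, no magic strings
            Name = row.Name,
            Email = row.IsEmailNull() ? null : row.Email  // generated null-handling helper
        };
    }
}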