View Full Version : Linked table structure idea



OTWarrior
11-16-2010, 02:54 AM
Just something that occurred to me as I was working; more of a discussion than a problem.

We use a frontend-to-backend split. Each copy of the frontend is held on each user's local computer (on the D: drive) and has linked tables pointing to a folder on a file server (nothing special, just a normal server for holding documents).

We have 60 users adding and amending records. At times (such as logging-out time) the database grinds to a halt due to network traffic, as the users' computers are stuck waiting on the server. If the backend files are made local (as I often do when developing), the speed is massively increased.

My idea is this:
Rather than link to the server for the backend, the key tables (the ones doing most of the adding and amending of records) are duplicated on each local computer. When the user loads the program in the morning, all tables are updated.

Whilst actually using the database, they are altering their local files. At critical points (such as when they finish altering a number of related records) the system would run an update query for the specific records, copying over the local version of the records and updating the server version.

When another user wants to access that same record that day, it would pull from the server and save to their local version, then they can continue as normal.
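As a rough sketch of that "push local changes up, but don't let older data clobber newer" step. This is written in Python with SQLite purely as a stand-in for Access linked tables and an update query, and the table/column names (orders, payload, last_modified, dirty) are invented for illustration:

```python
import sqlite3

def push_local_changes(local: sqlite3.Connection,
                       server: sqlite3.Connection) -> int:
    """Copy locally edited records up to the server copy, but only
    where the local row is at least as new as the server row."""
    pushed = 0
    rows = local.execute(
        "SELECT id, payload, last_modified FROM orders WHERE dirty = 1")
    for rec_id, payload, modified in rows.fetchall():
        cur = server.execute(
            "UPDATE orders SET payload = ?, last_modified = ? "
            "WHERE id = ? AND last_modified <= ?",
            (payload, modified, rec_id, modified))
        if cur.rowcount:          # server row was older (or equal), so it was updated
            pushed += 1
        # clear the dirty flag either way; the server had newer data otherwise
        local.execute("UPDATE orders SET dirty = 0 WHERE id = ?", (rec_id,))
    server.commit()
    local.commit()
    return pushed
```

The `last_modified <= ?` guard in the WHERE clause is what stops an older local copy from overwriting a newer server record; in Access you would express the same thing as a criterion on the update query.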

Like I said before, just an idea that came into my head. A few questions though:

1) If everyone is running update queries on the same table at the same time, is it likely to be slower, or even corrupt more easily?

2) Could such a system work, without having older data override the newer data?

3) If two people alter the same record at the same time, what would happen?

Any thoughts on this would be much appreciated.
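On question 3, the usual answer is "optimistic locking": keep a version number on each record and refuse a save if the version has moved on since the user read the row. A minimal sketch, again in Python/SQLite as a stand-in (the records table, payload column and version column are invented names):

```python
import sqlite3

def save_record(db: sqlite3.Connection, rec_id: int,
                new_payload: str, version_seen: int) -> bool:
    """Attempt to save; succeed only if nobody else saved since we read.
    The WHERE clause rejects the write when the version has moved on."""
    cur = db.execute(
        "UPDATE records SET payload = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_payload, rec_id, version_seen))
    db.commit()
    return cur.rowcount == 1   # False means someone else saved first
```

When the function returns False, the application can re-fetch the record and let the user merge or retry, rather than silently losing one person's edits.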

orange
11-16-2010, 08:05 AM
There is a 2 part scenario at this link that may be useful.
http://www.databasedev.co.uk/multi-user-applications.html

OTWarrior
11-17-2010, 04:09 AM
That was quite useful, thank you.

We do have our database as shared, but do not use record level locking.

Do you know of any problems with using record-level locking? I know my colleague said we can't use it, but I forget the reason why.

Does the "Local, then update network copy" idea sound like a better solution in your opinion?

orange
11-18-2010, 06:31 AM
Here is an article that deals with more of the "theoretical" aspects of database sharing and related issues causing deadlocks/timeouts etc. Not a direct answer to your questions, but it explains/describes some of the issues and causes.

http://www.google.com/url?sa=t&source=web&cd=1&ved=0CBIQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.72.4304%26rep%3Drep1%26type%3Dpdf&rct=j&q=sree%20isloor%20deadly%20embrace&ei=yijlTIuJAoumnwe03YC9DQ&usg=AFQjCNGysmqU4LmWaqQpQ-brIBMaj6cttg&cad=rja

CreganTur
11-18-2010, 11:05 AM
ADO transactions may help with processing all of those users' requests.
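For readers unfamiliar with the idea: a transaction groups related writes so they either all commit or all roll back (in ADO that's `BeginTrans`/`CommitTrans`/`RollbackTrans` on the Connection object). The same all-or-nothing behaviour, sketched here in Python/SQLite rather than VBA, with an invented accounts table:

```python
import sqlite3

def transfer(db: sqlite3.Connection, src: int, dst: int, amount: int) -> None:
    """Apply two related updates as one unit: both commit or neither does."""
    try:
        db.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                   (amount, src))
        db.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                   (amount, dst))
        db.commit()        # both changes become visible together
    except sqlite3.Error:
        db.rollback()      # on failure, neither change survives
        raise
```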

Honestly, if it were me, I would change the backend to an actual SQL Server environment since you have so many users, and use an Access Project as the frontend. That way you can keep the same GUI for your users but have a more robust and reliable backend for the traffic.

HiTechCoach
11-19-2010, 08:26 PM
You might consider using a SQL Server back end. You can still use an MDB; I find them better than an ADP. Just switching to SQL Server does not guarantee it will be faster. I have actually seen it get slower with a SQL Server back end.

This sounds more like a design issue. I have some split Access apps with almost 100 users. The key was to properly design the system to minimize the amount of data that is moved over the network.
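In practice "minimize the data moved" usually means binding forms to filtered, parameterized queries instead of whole tables, so only the rows a user actually needs cross the wire. A tiny illustration of the principle (Python/SQLite standing in for a Jet query over a network share; the orders table and columns are invented):

```python
import sqlite3

def fetch_customer_orders(db: sqlite3.Connection, customer_id: int):
    """Pull only the rows this user needs, instead of the whole table.
    Over a network share, the rows you *don't* fetch are the saving."""
    return db.execute(
        "SELECT id, item, qty FROM orders WHERE customer_id = ?",
        (customer_id,)).fetchall()
```

A form bound directly to the orders table drags every record across the network before filtering; the query above filters first, which is the difference the post is describing.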


Does the "Local, then update network copy" idea sound like a better solution in your opinion?
I would only consider this as a very last resort. It opens up a huge hole for "data issues". IMHO, it would be a lot easier and more practical to look at the design of the application and fix the performance issues.