• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 2398

SUGGESTIONS ON HOW TO SPEED UP AN ACCESS DATABASE, PLEASE?

Hello everyone,

I have created an Access database that is operating very slowly.  It takes about 30 seconds to open the main form.  Once I am inside the form, navigating through records is faster, taking between 5 and 7 seconds to switch between records.

It is separated into a front-end/back-end, with the back-end housed on a Novell server so data can be shared among 7 users.

All of the systems, with the exception of 1, are fairly good, with speeds of 2.5 GHz or faster and 512 MB of RAM.

The front-end itself is around 200 MB in size, and there are several tables (30 tables) linked to it.  Most of the linked tables are around 10-20 MB, with one being 40 MB, and the others 80, 140, 150, and 200 MB in size.  Overall, there is a lot of data in this database.

Although I realize it's very hard to diagnose this without having the database in front of you, if anyone can offer any suggestions on how I can significantly improve the speed, I'd be overjoyed (especially if they work).

I also want to add that I am really a noob when it comes to Access programming, and although I have been working on this database for a while, I'm sure I don't have even close to the programming knowledge that most of you would consider to be 'decent'.  As a result, most of the functionality has been set up 'visually' rather than with code.  There is some coding, but it is fairly limited.

Thanks in advance to anyone that can help.

:-)  :-)  :-)
4 Solutions
 
NavicertsCommented:
Not sure what your setup looks like but you might want to think about moving what processing power you can to the server if you can.

For instance, if you joined five tables together to create one larger table in your Access DB, then you might want to join those five tables together on your server instead and create a view from them.  Then in your Access DB you can reference that view.

Not sure if this applies to your DB or not but just a thought.

Good Luck

-Navicerts
 
Alan WarrenCommented:
Hi Finn1

I've found the number one throttler in Access apps, when linking to external sources, is the use of function calls in SQL statements; this includes built-in functions such as DLookup, etc.

The reason for this is the client-side cursor restriction of the Jet provider.

Effectively, this means that all sorting, grouping, and filtering happens client-side (on the front-end machine).

So if you have a recordsource for a form like " Select * from Customer Where cusName Like 'Smi*' ", Jet will return all Customer records to the client machine, where they will be filtered.

Now if you have a recordsource like: " Select concatCusName(cusID) from Customer "
(concatCusName being a function call that concatenates the customer FName, OtherNames and LName), Jet will first return the entire Customer table client-side; then for EACH execution of the function concatCusName, Jet will go and get the entire Customer table again! So the Customer table will be sent across the LAN once for each record in the Customer table. This is a little exaggerated, but not too far from actuality.
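Jet itself can't be instrumented this way, but the per-row cost Alan describes can be sketched with Python's sqlite3 (table and function names here are hypothetical, echoing the example above): a user-defined function in a SELECT list is invoked once for every row, so any network round trip hiding inside it is multiplied by the row count.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Customer (cusID INTEGER, FName TEXT, LName TEXT)")
conn.executemany("INSERT INTO Customer VALUES (?, ?, ?)",
                 [(i, "First%d" % i, "Last%d" % i) for i in range(1000)])

calls = {"n": 0}

def concat_cus_name(fname, lname):
    calls["n"] += 1  # imagine a LAN round trip happening on every call
    return fname + " " + lname

conn.create_function("concatCusName", 2, concat_cus_name)
rows = conn.execute("SELECT concatCusName(FName, LName) FROM Customer").fetchall()
print(calls["n"])  # the function ran once per row
```

Replacing the function call with a plain join or concatenation in the SQL itself avoids the per-row callback entirely.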


Here's a link to help optimise your DAO:
Forty-Three Ways to Make DAO Faster--Programming to DAO Using Microsoft Access
http://msdn.microsoft.com/archive/default.asp?url=/archive/en-us/dnaraccessdev/html/ODC_MakeDAOFaster.asp


Take care...


Alan


 
Gustav Brock (CIO) Commented:
Which network client (MS or Novell) are you using? The MS client is crippled.

Also, your frontend seems very large. Have you compacted it? If that doesn't help, try importing all objects from the frontend into a new (empty) database.

If your backend tables have subdatasheets turned on, turn them off if you don't use them.

/gustav
 
Finn1Author Commented:
Thanks for the responses thus far everyone!

I'm reading through them trying to understand them properly.  It sounds like people are saying the following (and please correct me if I am wrong):

1)  I have too much data in the front-end of the database, contributing to its large size
2)  The queries/code I do have in the front-end are slowing it down, because of the way sorting, grouping, and filtering happen client-side
3)  More work should be done on the server, rather than in the front-end of the database.

By the way Cactus, the server we use is a Novell server, and I have tried to compact the database.  Most of the tables we use are linked, not imported.  There is only one large table with 3,000 records (only 5 fields), and I am not sure why this would cause it to be so large.  It doesn't make sense to me why our front-end is so large.  The database has a very large number of queries and forms, but it seems unlikely these would be the cause of its enormous size.

If anyone could please explain how to have more work done on the server (back) end, and if this would indeed speed things up, it would be appreciated.

And I will read through that website.  Hopefully I will be able to comprehend enough to use what it is saying..lol

Please keep the suggestions rolling guys :-)
 
Finn1Author Commented:
Just re-reading Cactus' comments:

When you suggest importing frontend objects into a blank database, does that include forms, reports, queries, etc?  If I do that, what would be the next steps?
 
NavicertsCommented:
>> If anyone could please explain how to have more work done on the server (back) end, and if this would indeed speed things up, it would be appreciated.


On your joins if you can move them to the server it would be better.  You said you had 30 tables linked to your DB.  

Because I don't know what you are doing, I will give an example.

In your Access DB you join several tables together and do some math to create a report with about 200 rows of data, including summed and calculated values.

Instead, you could join and do your math on the server and create a view that you then link to your Access DB and just spit out to your report.  This way no math is done on the front end (and the view should be held in RAM on your server, so it should be fairly fast)

I hope I am a little more clear.  I am not too good at explaining things so if anyone else wants to give it a shot go for it :)


-Navicerts
 
Finn1Author Commented:
I think this picture is becoming clearer (although it is still fuzzy..sorry for my noobiness)

Ok, here are some silly questions... How would I do the joins on the server?  And how would I have the front-end access them?  When you say 'joins', are you referring to queries that reference data by joining tables together (that is what I'm assuming)?  For example, our database joins the major employers table with the community table (list of communities, with state, MSA, county reference indexes), so that when pulling up a community record, the major employers are shown.

A little more information for you as well.  This database contains information on 4,000 communities across the U.S. and Canada.  It tracks information such as schools, major employers, wages that employers pay, and holds a significant amount of demographics and anecdotal-type info.

I hope I'm not making things seem worse than they really are.  My apologies for my lack of knowledge and the dumb questions....

 
Gustav Brock (CIO) Commented:
> By the way Cactus, the server we use is a Novell server ..

Yes, that's fine. But which client do you use on the workstation? Does Novell pop up all over when you log into the network?

> Most of the tables we use are linked, not imported.  ..

You'll have to open the backend database and look to see if the subdatasheet option is turned on. If so, it can bring a database to its knees.

> There is only one large table with 3,000 records ..

This is not a large table but a very small one.

> It doesn't make sense to me why our front-end is so large.

Pictures in forms and reports could be one source.

> If anyone could please explain how to have more work done on the server (back) end ..

There is no way. The backend is just a database file, not a database engine.

> When you suggest importing frontend objects into a blank database, does that include forms, reports, queries, etc?

Yes, everything.

> If I do that, what would be the next steps?

None, except to press Ctrl+G and compile all modules. Then close, and notice the file size.

/gustav


 
Finn1Author Commented:
Hi Cactus,

Thanks again for your follow-up (same goes to everyone that is taking the time to respond)

The workstations are also Novell (they have pop-up when logging in).

How do I check if the subdatasheet option is turned on? Where do I find it? and what is a subdatasheet option?.......sorry, I am a complete noob

Pictures and forms are stored in a separate table, and are only shown on one form.  There is another form with a picture, but it is only opened on request.  Pictures for that form are stored on the network and are accessed via a file path (that the database remembers; pulled that from the Northwind picture function) when requested.

People are referencing having the server do more work for me rather than the workstation.  Isn't that slower?  And what do they really mean by that, if it's not the back-end they're referring to?

I will try and transfer everything and see how it goes..........:-)

Thanks for all the advice!!!!!!!!
 
Gustav Brock (CIO) Commented:
> How do I check if the subdatasheet option is turned on? Where do I find it?

Open the table in design view. Choose properties and locate subdatasheets. Set to None.

> what is a subdatasheet option?

You can specify a drill-down option. Very useful if you need it, but a resource hog.

/gustav
 
plqCommented:
I experienced considerable network problems with Access 97 a few years back, and here's what I found:

Firstly, move the system.mdw onto each client and join that local workgroup instead. That saves a lot of network bandwidth.

Secondly, put all the forms, reports, macros, modules and queries into a separate database from the data itself, and attach the data to the main database. Then install each "client" database on each client machine and keep only the data on a central server.
 
NavicertsCommented:
Yes, by join I am just talking about joining two tables.  If you join them in your Access DB, the user's computer has to do the work of joining the tables together, whereas if you had a view already created on your server, the user's computer would not have to do the "work" of joining the tables together.

This is done through creating views with your RDBMS.

-Navicerts
 
Finn1Author Commented:
Thanks Plq

On your comment, "make all the forms, reports, macros, modules and queries into a separate database from the data itself, and have the data attached to the main database", I think I have done that already by separating the database into a front-end (queries, forms, reports, and table links go here) and a back-end (all the data tables, housed on the network).  I may not be reading what you're saying properly.

system.mdw is already on each user's computer, so I don't think that's the problem, but it's good to know I have it set up correctly, so thanks for that.

Please feel free to correct me if I misunderstood what you were saying.

Thanks for your input :-)

Still looking for all the advice I can get.  I've been looking at what has been said thus far, and nothing has worked significantly yet.

 
NavicertsCommented:
How many queries do you have to run to view your reports and forms?  Or are you just displaying a table directly from your server?

-Navicerts
 
Finn1Author Commented:
Well, I'm not sure exactly, but I think I am running about 24 queries on my main form (from glancing through the tabs in the database), possibly a few more than that.  Reports run about 7 each (guesstimate).
 
Finn1Author Commented:
"This is done through creating views with your RDBMS" - not sure I understand view or how to create them.  I will do some research on that, but if you could explain a little bit more...sorry :-/  

For example, what is RDBMS?

 
plqCommented:
>> On your comment

The only extra bit is to put that front-end database on the client computer, which I guess you might have done already.

On the data volumes you're talking about, Access is usually fast. I would double-check that you're not missing a join in any combo queries (and hence getting loads of data back). I'll come back if I think of anything else.
 
NavicertsCommented:
plq is correct about Access usually being fast on the numbers you are talking about, if the largest tables you are connecting are around 3,000 records.  But I'm not sure if you have a small number of records in your linked tables or just in the results you are querying for.

I don't know a lot about Novell specifically, but I'm sure you can do the following.

Basically, what you can do is move the 24 queries out of your Access DB and onto your server.  For instance, say you had 3 tables on your server with the common field "Customer ID".  Normally you would link all three tables to your Access DB, then join them together with a query (or a number of queries) to produce a result (a report or a form).

Linked Tables
------------------
Customer ID  
Name
Address

Customer ID
Agent
Social Worker

Customer ID
Weight
Date


Instead, you could join the 3 tables together out of the Access DB and create a view of the 3 tables joined together leaving you with only one linked table to your Access DB...

Linked Table
---------------
Customer ID
Name
Address
Weight
Date
Agent
Social Worker


Now you have taken the processing and moved it from the user's PC to the server.  The thing to worry about when you do this is that you might need more authority on the server than you have now.

Ask your database administrator, if you have one.


-Navicerts


 
NavicertsCommented:
Here's an example of a random CREATE VIEW statement...



CREATE VIEW supplied_parts AS
    SELECT *
    FROM p
    WHERE pno IN (SELECT pno FROM sp)
    WITH CHECK OPTION;
 
Finn1Author Commented:
To understand the structure of the database a little more, I will explain the tables and their relationships.

CityMaster table - list of cities with columns indicating state, county, and Metropolitan Statistical Area - this is a lookup table which links the various geography types (if applicable) - 4,000 records
CityList table - table of city demographics (indexed to city code from CityMaster table) 4,000 records
MSAList table - table of MSA demographics (indexed to CityMaster table) 320 records
Schools table - table of post-secondary schools (index is city code from CityMaster table) 4,100 records
Major Employers table - table of major employers (index is city code joined to CityMaster table) 8,000 records
City50 table - lists all cities, and what other cities are within 50 miles of each other 181,000 records - so that the database knows what cities are within 50 miles of any particular city - for queries listing what employers or schools are within 50 miles of a community of interest
MSAGeo table - lists which cities are within each MSA - so that the database knows what cities are within any particular MSA - for listing what employers or schools are within the MSA that a community may be part of - 3,000 records
Notes - lists anecdotal information applicable to each community - 4,500 records
Contacts table - lists contacts and associated information, within each community that we speak with - 3,200 records

All of these tables are only linked to the database, and are not part of the front-end itself.  MSAGeo was the only table that was directly inside each front end, but that was changed today (in an attempt to speed this thing up).
 
NavicertsCommented:
I am going to have to retract suggesting that you move any joins; now that I have seen how many records you are dealing with, there really shouldn't be any problems with speed.

Perhaps there is something going on in your queries that is slowing things down.

See if one of the queries is the culprit: run each one separately and see if any one of them is the problem.  Also try simply opening your linked table right from your Tables screen, and see how fast that comes up.

-Navicerts
 
Alan WarrenCommented:
Hi Finn1

Guess that brings us full circle, back to function calls in SQL.
Let's talk about your queries and the recordsources of forms and reports, and the rowsources of listboxes and comboboxes.
I'm particularly looking for queries that reference Form objects, e.g. "where cusname = Forms!MyForm.txtSometextbox".

If you have a lot of cross-object population happening in your SQL (rowsources and recordsources), this will slow you down.

As previously mentioned, check your SQL (queries, rowsources and recordsources) for function calls; these can cripple a front-end/back-end environment.

Another consideration is over-indexing; indexes are only effective if used sparingly on chosen key fields, particularly fields that are subject to searches: cusLastName is a good candidate, city is another. The same can be said for under-indexing; you need balance here.

Someone mentioned that the backend data source is not a database engine; that's absolutely correct. If the back-end is an MDB, it is a 'dumb catalog'. There is no point transferring your logic to the back-end in a Jet/MDB environment. On the other hand, if your back-end is a true client/server DBMS like SQL Server, then you should transfer as much of your logic to the back-end as possible, in the form of stored procedures and/or views, which are executed server-side.

I think your problem is going to be in the SQL that populates your forms, reports, combos and listboxes.
Possibly some issues with indexing may also contribute to your speed, or lack thereof.

Failing that, you are looking down the barrel of some corruption issue, and creating a new front-end and sucking all the objects from your old front-end into the new one will be the quickest and best fix for this.

How is the speed if you run a front-end on the same machine that houses the back-end data?



Alan
 
Finn1Author Commented:
OK guys,

Here's an update on what I've been doing as a result of your great advice thus far.  I went through the queries one at a time for every form and subform, and by changing the queries to include only the fields needed for the forms, I've managed to reduce the time it takes to open the main form from 27 seconds down to 15 seconds.

The one query that is still slowing things down immensely is the 50-mile radius table (180,000 records) that identifies which major employers are within 50 miles of any community.  I'll try to explain how it's set up.

1)  The 50-mile table includes every city in the database, approx 3,400, and identifies in separate rows which cities are within 50 miles.  For example, Phoenix, AZ would be listed as a primary community, and if 20 cities (such as Tempe, Mesa, Peoria, etc.) are within the 50-mile radius, there would be 20 rows of data for the city of Phoenix.  Tempe, Mesa, and Peoria would also be included as main cities, with the cities within their 50-mile radius listed in individual rows.  A row of data showing that Mesa is within 50 miles of Phoenix would look like: 1234 (identification code for Phoenix), Phoenix, AZ, 8943 (identification code for Mesa), Mesa, AZ

2) Then I join the employers table to the 50-mile table, creating a table that shows which employers are within 50 miles of any city

3) Then I filter this table to include only data for the specific community of interest (the community presently being shown in the main form) by using a child/master join

This process takes the most time, and I am curious to know if anyone has any better suggestions how I might speed this up?

Getting close guys...thanks for everything thus far
 
NavicertsCommented:
Post the SQL for that query

-Navicerts
 
Alan WarrenCommented:
my thoughts exactly :)
 
plqCommented:
Also post the exact structure of the City50 table

I would expect it to perform OK if you have

City1ID int
City2ID int
Distance int

and an index on whichever city column you "go in on"
 
Alan WarrenCommented:
I gotta leave it with  you fellas, 3:10 am here, gotta catch some zzzzzz's

Good luck...

Alan
 
Finn1Author Commented:
Argh,

Update number 2... whereas it was pulling up the main form in 10 secs yesterday, it has gone back to pulling it up in 27 secs, even with the changes I mentioned earlier (limiting the fields it was pulling during queries).  Looking into this and I'll post an update, but this is very frustrating :-( :-(
 
NavicertsCommented:
Another thing to keep in mind while you are working on it is network traffic.  Over lunch, or after everyone has left for the day, it may run faster; or it may not.

Just a possible reason for the time change that you wouldn't be able to see :)

Keep us posted!

-Navicerts
 
Finn1Author Commented:
The SQL for the query that locates employers within 50 miles; a distance column is not used because this list was verified previously using a GIS.

SELECT Site50MileInfo.[COMMUNITY FIPS], EmployerCompetition.*,
       Site50MileInfo.[COMMUNITY NAME], Site50MileInfo.STATE,
       Site50MileInfo.[CWI50MI NAME], Site50MileInfo.[CWI50MI POP98],
       Site50MileInfo.[COMMUNITY NAME], CityList.POPWI50MILERADCY,
       EmployerCompetition.[# of Employees], EmployerCompetition.[Function],
       EmployerCompetition.[Employer Name], EmployerCompetition.[GeneralType],
       CityList.LABORFORCECY
FROM CityList
     INNER JOIN (Site50MileInfo
         INNER JOIN EmployerCompetition
         ON Site50MileInfo.[CWI50MI FIPS] = EmployerCompetition.FipsCode)
     ON CityList.CITYFIPSCO = Site50MileInfo.[COMMUNITY FIPS]
WHERE EmployerCompetition.[Function] <> "Major Employer"
  AND EmployerCompetition.[GeneralType] <> "Major Employer"
ORDER BY EmployerCompetition.[# of Employees] DESC;


SITE50MILEINFO - the table listing FIPS, COMMUNITY NAME, CWI50MI FIPS, CWI50MI NAME

FIPS / CWI50MI FIPS - the identification code for each community
COMMUNITY NAME - the community of interest
CWI50MI NAME - name of a city within 50 miles of the primary community of interest
 
Finn1Author Commented:
I was thinking the same thing.  Do you think it would make that much of a difference (network traffic)?
 
NavicertsCommented:
I am not really sure; where I work it would not.  Hopefully someone else will comment on this, but I think it depends on your network and how many users are on it.

-Navicerts
 
Finn1Author Commented:
Well, there are approximately 300-400 users on our network, so I think that may be it.  It was around 6:00 pm when I got the form to pull up in 10 secs, and that may have just been because most people had gone home (slackers...lol).  Any suggestions on how to get around the network traffic thing?
0
 
plqCommented:
On the network side, check your users are not on a simple hub. Hubs + MS Access = disaster. You need them on a switch. Must admit my networking knowledge is weak, but I think hubs will spray each packet to every network card, whereas a switch will route it only to the intended IP? Err, don't shoot me if I got that wrong!

Comments/questions on the query:
1. What's the data type of the FIPS codes? You would want them to be ints for best performance
2. Is there an index on Site50MileInfo.[COMMUNITY FIPS]?
3. Access doesn't use indexes on <> WHERE criteria. If you can make it an = or IN clause, it can use indexes
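Jet's optimizer can't be inspected easily, but the same =-versus-<> effect shows up in any engine with a plan explainer. A sketch with Python's sqlite3 standing in (column names borrowed from the query above; exact plan text is engine-specific):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Emp (FipsCode INTEGER, Function TEXT)")
conn.execute("CREATE INDEX ix_function ON Emp (Function)")

def plan(sql):
    # Concatenate the 'detail' column of EXPLAIN QUERY PLAN output
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

eq_plan = plan("SELECT * FROM Emp WHERE Function = 'Major Employer'")
ne_plan = plan("SELECT * FROM Emp WHERE Function <> 'Major Employer'")

print(eq_plan)  # equality is satisfied via the index (SEARCH ... USING INDEX)
print(ne_plan)  # the inequality falls back to a full table scan (SCAN)
```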


 
plqCommented:
Could you completely scrap the City50 table and replace it with X and Y coordinates for each city on the city table? Then you could work out the 50-mile rule with a bit of Pythagoras.
 
NavicertsCommented:
One way I have dealt with this type of problem is to run jobs in batch while everyone is at home, and then give them daily reports instead of real-time ones.  For obvious reasons this will only work in some cases and may not be applicable to your situation.

In the past I have had jobs that required several queries to be run, and it took probably about 5 minutes to get the final result.  I changed the system so that at 5 AM every day a batch job would run; the reports would be generated in PDF format and saved to a folder, where they would replace the previous day's report, and the file was ultimately linked to a web page that the users could access.

In addition, I made available a button for updating all the PDFs; since the users knew it would take 5 minutes, most of them just used the daily reports. We also didn't have to worry about changes to our data by the hour or minute.

Of course, this route is the quitter's way! Maybe you can speed it up :)

-Navicerts
 
Finn1Author Commented:
Thanks for the feedback :-)

Not sure the batch thing would work.  What the 50-mile tables do is give users of the database the ability to see what employers are within 50 miles of a particular community.  So if you looked up Phoenix (for example), you would be able to see employers that are located within the limits of the city of Phoenix, as well as in any community within 50 miles.  This also reduces redundancy: without this feature, you would have to enter the same employer in EVERY community it is within 50 miles of.  That would be redundant and very time consuming.

Sorry if I'm explaining something you may already understand, just kinda making sure it makes sense to you all.

I would be very interested in the Lon/Lat thing.  I can get the long/lat coordinates of any community's centroid.  What kind of calculation would this be, and would it really be faster?

On the network thing, I guess I'll know by 6 pm today if the network traffic is really a big cause of the problem (all the slackers will be gone by then...lol).

Thanks for all the input so far guys.  This has been really helpful and I'm learning more with everything you experts have to say.  :-)
 
NavicertsCommented:
Yup, it wouldn't work to run a batch for your application :)

-Navicerts
 
plqCommented:
Lon/Lat is easy I think.

complete guess but something like:

   Where sqrt((a.x1 - b.x1) ^ 2 + (a.y1 - b.y1) ^ 2) <= 50

a = city1 table b=city2 table

This kind of number crunching is nothing compared to performing a table lookup
 
Finn1Author Commented:
Found this on the internet, for any that are interested:

Formula and code for calculating distance based on two lat/lon locations

The following is the formula I use in Perl to do the calculations.  Perl expects all of the angles to be in radians.

return &acos(cos($a1)*cos($b1)*cos($a2)*cos($b2) + cos($a1)*sin($b1)*cos($a2)*sin($b2) + sin($a1)*sin($a2)) * $r;
Where:

$a1 = lat1 in radians
$b1 = lon1 in radians
$a2 = lat2 in radians
$b2 = lon2 in radians
$r = radius of the earth in whatever units you want

The values I use for radius of the earth are:

3963.1 statute miles
3443.9 nautical miles
6378 km

To convert decimal degrees to radians, use the following Perl:

# define an accurate value for PI

$pi = atan2(1,1) * 4;

#
# make sure the sign of the angle is correct for the direction
# West an South are negative angles
#

$degrees = $degrees * -1 if $direction =~ /[WwSs]/;
$radians = $degrees*($pi/180);
To convert degrees, minutes, and seconds to decimal degrees, use the following Perl formula:

$dec_deg = $deg + ($min + $sec/60)/60;
Finally, there is no acos function in Perl, so here is the function I use.  I don't remember where I got the math for this.

# subroutine acos
#
# input: an angle in radians
#
# output: returns the arc cosine of the angle
#
# description: this is needed because perl does not provide an arc cosine function
sub acos {
   my($x) = @_;
   my $ret = atan2(sqrt(1 - $x**2), $x);
   return $ret;
}
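For anyone wiring this into the database later, the Perl above translates directly; here is the same spherical-law-of-cosines calculation as a small Python sketch (constant and function names are my own):

```python
import math

EARTH_RADIUS_MILES = 3963.1  # statute miles, per the list above

def great_circle_miles(lat1, lon1, lat2, lon2):
    """Distance in statute miles between two points given in decimal degrees."""
    a1, b1, a2, b2 = map(math.radians, (lat1, lon1, lat2, lon2))
    # Same expansion as the Perl: cos(c) = cos a1 cos a2 cos(b1 - b2) + sin a1 sin a2
    cos_c = (math.cos(a1) * math.cos(b1) * math.cos(a2) * math.cos(b2)
             + math.cos(a1) * math.sin(b1) * math.cos(a2) * math.sin(b2)
             + math.sin(a1) * math.sin(a2))
    # Clamp against floating-point drift slightly outside [-1, 1]
    return math.acos(max(-1.0, min(1.0, cos_c))) * EARTH_RADIUS_MILES

# One degree of longitude along the equator is roughly 69.2 statute miles
print(round(great_circle_miles(0.0, 0.0, 0.0, 1.0), 1))
```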
 
Finn1Author Commented:
Alan,

On overindexing: it is possible I have been overindexing.  I've been looking through the tables I've created, and Access has indexed most fields by default.  Should I go through each table and remove the index from the fields that I don't use to sort, join, etc.?
 
plqCommented:
Given the amount of effort put into this thread, and the fact that virtually every possible cause and workaround was offered, including considerable thought and consideration for the asker, I would say that points should be awarded and that a delete is not appropriate.
 
Finn1Author Commented:
Hi Plq,

Actually, none of the solutions here addressed the actual problem (which has now been solved).  The problem was in the database design itself.  The 30 tables that are linked to the front-end were all in separate MDB files on the server.  I had started a different thread about relinking the database, and it was inadvertently solved there.  Some tech experts felt I should put all of the tables into 1 back-end MDB (instead of the 20+ they were split into) so that I could institute some code that allowed the user to point to one MDB to relink the tables.  Not only did that work, but it sped up the time loading the form from 27 seconds... to 3 seconds.  Every other aspect of the database is faster now as well.

The large number of MDB files is something I inherited, and I did not realize that this was the problem; but thankfully it has been fixed.

Thanks again for all of your efforts to solve the problem.  I'm sure you would have realized the problem if I had noted that the 30 linked tables were in separate MDB files.

I am going to split up the points to all that tried to help.  Thanks again everyone!
