View Full Version : Maximum Rows and Columns in a MySQL Table
letom
02-26-2013, 06:57 AM
Hi,
What is the maximum number of rows we can store in a MySQL table?
For columns, is it better to stay under 80, even if the actual limit is higher? Is that true?
If I am storing trillions of rows (practically unlimited) in a table, how can I retrieve results faster? How can I do pagination in MySQL?
Your answers and discussion are highly appreciated.
Regards
Tom
james438
02-26-2013, 03:17 PM
From what I have read on the MySQL website there is no limit to the number of rows. It is based on the size allowed for your database. For example on GoDaddy I can have many different databases, but they are limited to 1GB each.
Scalability and Limits:
Support for large databases. We use MySQL Server with databases that contain 50 million records. We also know of users who use MySQL Server with 200,000 tables and about 5,000,000,000 rows. ref (http://dev.mysql.com/doc/refman/5.5/en/features.html)
E.7.4. Table Column-Count and Row-Size Limits
There is a hard limit of 4096 columns per table, but the effective maximum may be less for a given table. The exact limit depends on several interacting factors. ref (http://dev.mysql.com/doc/refman/5.0/en/column-count-limit.html)
I would store whatever number of columns is easiest to keep track of.
I am not sure about trillions of rows. I can tell you that MySQL is very fast at retrieving data compared to other approaches you could use in web development.
djr33
02-26-2013, 08:45 PM
MySQL is very efficient, but the numbers you mentioned are extreme. I think it will be possible to find a way to make all of that work, but it might not be so fast then. If you're going to have that much content, then you may want to look into some kind of custom system involving caching or something-- just imagine what Google must do with all of the website data it searches.
I can't imagine the need for that much information without a huge project/company behind it. I'd suggest that if you start to actually reach the limits of technology like MySQL, you may want to hire a consultant who specifically deals with these things. I don't think there's any simple alternative (like different database software) that would be much better than MySQL.
letom
02-27-2013, 05:38 AM
@James @Daniel
Thanks for the messages from both of you.
From your statements I understand that scalability is not a problem; it depends on the hardware and HDD capacity. But the problem arises in the retrieval of data: a huge number of records reduces the speed of retrieval. To make retrieval faster, we can do pagination or clustering.
Are any tutorials available for pagination and clustering?
djr33
02-27-2013, 05:50 AM
to do retrieval faster we can do pagination or clustering.
Not exactly. Certainly displaying all results would take a very long time. But that's not what I was talking about. Displaying a single result is very fast in MySQL, but it does need to search through many results. (Of course, it depends on your query: if you use ORDER BY or WHERE, it will need to look at all rows; if you just look for the first row, based on the existing index, that would be faster.) With a normal database, that won't take too long for one result. But if you literally have millions or billions of rows, even that single operation will become slow.
So pagination is necessary even for a small database (it would be odd to display 1000 rows on an HTML page anyway), but it won't solve the problems for a huge database.
As for pagination, you can find many tutorials by googling "mysql pagination". Personally I like the one on http://www.php-mysql-tutorial.com/ (the site isn't well organized, but the tutorials are good). There are lots out there though, all with similar methods I think.
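To sketch what those tutorials cover (the table and column names here are made up for illustration): the usual approach is LIMIT/OFFSET, though on very large tables "keyset" pagination scales much better:

```sql
-- page 3 with 20 rows per page: skip 40 rows, return 20
SELECT id, title FROM articles ORDER BY id LIMIT 20 OFFSET 40;

-- OFFSET still walks past every skipped row, so it gets slow on huge tables;
-- "keyset" pagination seeks from the last id seen on the previous page instead:
SELECT id, title FROM articles WHERE id > 1040 ORDER BY id LIMIT 20;
```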
I'm not sure about clustering. I haven't heard of that for MySQL, but maybe it would help here.
Another option is caching-- storing the results from common queries. But that only works if you have lots of computations or a subset of results that will be cached-- if you frequently use every result, then you'd have a table equal in size to the original. So the use of caching must be strategic and varies by project.
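One simple form of that strategy (a sketch; the table and column names are hypothetical) is a summary table holding the precomputed results of an expensive query:

```sql
-- precompute an expensive aggregate into a small cache table
CREATE TABLE dailyStats AS
SELECT DATE(viewedAt) AS day, COUNT(*) AS hits
FROM pageViews
GROUP BY DATE(viewedAt);

-- reads now hit the small table; refresh it on a schedule
SELECT hits FROM dailyStats WHERE day = '2013-02-27';
```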
With a DB of that size, you need to make efficient use of indexing and be careful what kinds of queries you run. You need to avoid full table scans at all costs - for example, running a query like SELECT something FROM someTable WHERE someColumn LIKE '%this%'; on a trillion rows would be idiotic.
This is exactly what indexes are for - if you can look up rows in an index, you don't have to look through each table to find the rows you need. There's a book by Bill Karwin called SQL Antipatterns (http://pragprog.com/book/bksqla/sql-antipatterns) that I would wholeheartedly recommend. You might also start reading up at Rick James' website (http://mysql.rjweb.org/doc.php/ricksrots).
It might also be necessary to partition your DB across several machines to keep it efficient. What you need to do will depend on your specific application, however.
Daniel, your link is broken...?
djr33
02-27-2013, 07:22 PM
(Fixed the link. I guess it needed the www.)
This is exactly what indexes are for - if you can look up rows in an index, you don't have to look through each table to find the rows you need. There's a book by Bill Karwin called SQL Antipatterns that I would wholeheartedly recommend. You might also start reading up at Rick James' website.
So it's fast as long as you use the existing indices in the order that they exist, right?
But it would be slow if you did anything else like ORDER BY or WHERE, right? It would be extra slow for WHERE x LIKE '%something%', but even just WHERE x=1 or ORDER BY x would need to look at all rows, I think.
If WHERE or ORDER BY was indexed, then it would be fine:
-- assuming `x` is indexed
... WHERE x < 5
... ORDER BY x
However, if it's not indexed (or cannot benefit from an index), then it will force a full table scan (and on a trillion rows, will probably time out - maybe even lock things up):
-- can't index random orders
... ORDER BY RAND()
-- can index `x`, but it won't help with LIKE
... WHERE x LIKE '%something%'
djr33
02-27-2013, 09:16 PM
Right, ok. So if you had a database of movies and wanted to search by title, that would be problematic, unless that title was already the index (or corresponded to a numerical index).
You can have multiple indexes on a table. If you have frequent queries, you should create indexes for them if possible.
As far as your example goes, a numeric index wouldn't help with searching by title. You'd need to index the titles.
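For instance (assuming a hypothetical `movies` table), adding and using such an index might look like this:

```sql
-- add an index on the title column
ALTER TABLE movies ADD INDEX idx_title (title);

-- can use the index: the pattern has no leading wildcard
SELECT * FROM movies WHERE title LIKE 'The Godfather%';

-- still cannot use it: the leading % forces a full scan
SELECT * FROM movies WHERE title LIKE '%Godfather%';
```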
djr33
02-27-2013, 11:46 PM
You can have multiple indexes on a table. If you have frequent queries, you should create indexes for them if possible.
Interesting. I haven't ever tried that. I guess it takes more storage space?
As far as your example goes, a numeric index wouldn't help with searching by title. You'd need to index the titles.
I was imagining static data; true, if it changes the numbers would be all rearranged. (I think I was imagining an index on that, just unaware of that option.)
Interesting. I haven't ever tried that [multiple indexes]. I guess it takes more storage space?
Yeah, sure. But if it saves you from doing a full table scan, it's certainly worth it. You shouldn't index "everything" (that would be a waste), but anything that one of your queries uses should be indexed, if possible. For example:
CREATE TABLE phoneNumbers(
user INT NOT NULL
,country VARCHAR(2) NOT NULL
,area VARCHAR(3) NOT NULL
,number VARCHAR(7) NOT NULL
,bestTimeToCall TIME NOT NULL
,PRIMARY KEY( country,area,number )
);
The primary key is an identifier for the row (all phone numbers *must* be different, so it's a natural choice - no need for an artificial "id" column). MySQL will use it to make sure you don't enter duplicate phone numbers. If you'd like to (for some reason), you can check if a phone number exists in the DB like so:
SELECT 1 FROM phoneNumbers WHERE country=? AND area=? AND number=?
And MySQL will use the primary key index to check. It won't look at the table at all.
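You can check this yourself with EXPLAIN (exact output varies by MySQL version; the values below are placeholders):

```sql
EXPLAIN SELECT 1 FROM phoneNumbers
WHERE country='1' AND area='555' AND number='1234567';
-- if the Extra column says "Using index", MySQL answered
-- the query from the index alone, without touching the table
```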
Say you want to be able to look up all phone numbers belonging to a certain user -
SELECT country,area,number FROM phoneNumbers WHERE user=?
Add an index:
CREATE TABLE phoneNumbers(
user INT NOT NULL
,country VARCHAR(2) NOT NULL
,area VARCHAR(3) NOT NULL
,number VARCHAR(7) NOT NULL
,bestTimeToCall TIME NOT NULL
,PRIMARY KEY( country,area,number )
,INDEX( user )
);
Now you have two indexes, with extra overhead to maintain, but you use both of them regularly and the time they save more than makes up for it.
Now, say you want to be able to select phone numbers you can call before lunch:
SELECT country,area,number FROM phoneNumbers WHERE HOUR( bestTimeToCall ) > 8 AND HOUR( bestTimeToCall ) < 12
You can index the `bestTimeToCall` column:
CREATE TABLE phoneNumbers(
user INT NOT NULL
,country VARCHAR(2) NOT NULL
,area VARCHAR(3) NOT NULL
,number VARCHAR(7) NOT NULL
,bestTimeToCall TIME NOT NULL
,PRIMARY KEY( country,area,number )
,INDEX( user )
,INDEX( bestTimeToCall )
);
But it won't do any good, because the index will include the column values, not the results of the HOUR() function. This is similar to the reason LIKE queries don't benefit from indexes.
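A workaround (a sketch, not part of the original discussion) is to make the condition "sargable": compare the column itself against a range of values, so the index on `bestTimeToCall` can be used:

```sql
-- the column is no longer wrapped in HOUR(), so the
-- index on bestTimeToCall can satisfy this range condition
SELECT country, area, number
FROM phoneNumbers
WHERE bestTimeToCall >= '09:00:00'
  AND bestTimeToCall < '12:00:00';
```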
letom
03-10-2013, 03:55 PM
Thanks for all of your replies, posts and discussions. I hope this thread will get a great deal of traffic, because important matters are discussed here by experts.
As things stand, I think a database satisfies user needs extremely well if we apply accurate logic in query processing. But does any problem arise, in terms of security and retrieval of data, if we store some pieces of data in a file? How can we keep a file safe, and what precautions do we need to take to protect a data file?
I'm not sure what you're asking, here (or if you're asking a question at all). Can you please clarify?
letom
03-11-2013, 07:06 AM
@traq
Sure!
Can you offer your suggestions on the following?
Does any problem arise, in terms of security and retrieval of data, if we store some pieces of data in a file? How can we keep a file safe, and what precautions do we need to take to protect a data file?
djr33
03-11-2013, 07:25 AM
You have three options:
1. Allow everyone to see the page/file (this is not secure).
2. Create a password-protected file or directory. (One way is using .htaccess; another is to use a server-side language like PHP or a CMS.) Only users with the password can see the file (but others may know that it exists, and can try to guess the password).
3. Create a protected file that is not available via HTTP. No one can see the file over the web (but you can still get it with FTP, and a server-side language like PHP can access its contents just like a database). There are two main options:
--1) protect the file (eg, with .htaccess)
--2) store the file somewhere that is not accessible via HTTP, usually "above" the web root (outside of your public HTTP folder).
[--3) put the file where no one will guess. Not really secure. Probably not a real option.]
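For the .htaccess approach, a minimal sketch (this is Apache 2.2 syntax; Apache 2.4 would use `Require all denied` instead):

```apache
# block all HTTP access to this directory;
# FTP and server-side scripts can still read the files
Order deny,allow
Deny from all
```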
Alternatively, you can use a file type that does not get sent to users. This is the case with PHP files. The code is parsed into HTML, and only that generated HTML is ever seen by the users. Technically, anything else in the file is completely secure. This is why database passwords are safe to store in your PHP configuration files:
<?php $database_password = 'password'; ?>
<html>
...They will see the HTML but not the code inside the PHP unless you want to show them.
(Just don't ever accidentally show them the file or accidentally display the password to them, and make sure your server is configured to never allow PHP downloads-- basically if PHP is working, it should be secure. That's true by default on many servers.)
Another question is why you would want to do this. But you can.
Powered by vBulletin® Version 4.2.2 Copyright © 2021 vBulletin Solutions, Inc. All rights reserved.