How does Redis hold big data in RAM and write it back to disk?
I'm using PHP and MySQL in an Ubuntu environment.
I have a table with up to 110 million records holding the member feed, and
SELECT and UPDATE queries against it are very slow in MySQL.
Table structure:
member_feed_id int(12)
member_id int(12)
content_id int(12)
extra
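If this table were moved to Redis, one common mapping (a sketch under assumed key names; nothing here comes from the original post) is one hash per member_feed row plus a sorted set per member to keep the feed ordered:

```python
# Sketch: mapping the member_feed table onto Redis keys.
# The key patterns ("feed:<id>", "member:<id>:feed") are assumptions
# chosen for illustration, not an established schema.

def feed_key(member_feed_id: int) -> str:
    """Name of the hash holding one member_feed row."""
    return f"feed:{member_feed_id}"

def member_feed_index_key(member_id: int) -> str:
    """Name of the sorted set listing a member's feed ids (scored by time)."""
    return f"member:{member_id}:feed"

def feed_fields(member_id: int, content_id: int) -> dict:
    """Field/value pairs for the row's hash; the 'extra' columns would go here too."""
    return {"member_id": member_id, "content_id": content_id}

# With a client such as redis-py this would be used roughly as:
#   r.hset(feed_key(42), mapping=feed_fields(7, 99))
#   r.zadd(member_feed_index_key(7), {"42": timestamp})
```

Reads for one member's feed then become a ZRANGE over the sorted set followed by HGETALL per id, instead of a slow table scan.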
I have a social network application, and every second this table receives
many INSERT queries (pushed through a beanstalkd queue) as well as UPDATE
queries.
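For that ingest path, a worker drains beanstalkd jobs and writes them to the store. A minimal sketch, assuming JSON job bodies and the greenstalk/redis Python client libraries (none of which the post specifies):

```python
# Sketch of a beanstalkd worker writing feed rows into Redis instead of MySQL.
# The JSON job format and library choices are assumptions for illustration.
import json

def parse_feed_job(body: str) -> dict:
    """Decode one queued job into the fields of a member_feed row."""
    job = json.loads(body)
    return {"member_id": int(job["member_id"]),
            "content_id": int(job["content_id"])}

# The loop below needs running beanstalkd and Redis servers:
#
#     import greenstalk, redis
#     queue = greenstalk.Client(("127.0.0.1", 11300))
#     r = redis.Redis()
#     while True:
#         job = queue.reserve()
#         row = parse_feed_job(job.body)
#         r.hset(f"feed:{job.id}", mapping=row)  # key pattern is hypothetical
#         queue.delete(job)
```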
So I want to take a NoSQL approach and migrate all the data in this table
from MySQL to Redis. Redis holds all its data in memory, but how can Redis
hold 110 million records in RAM at once and handle them, and how does it
write from memory back to disk (for the data flowing from beanstalkd into
this table)?
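On the memory-to-disk part: Redis does persist its in-memory dataset, via point-in-time RDB snapshots and/or an append-only file (AOF) that logs every write and is replayed on restart. A minimal redis.conf sketch (the thresholds are illustrative defaults, not tuned recommendations):

```conf
# RDB: snapshot the whole dataset to dump.rdb
save 900 1        # after 900 s if at least 1 key changed
save 60 10000     # after 60 s if at least 10000 keys changed

# AOF: log every write command for replay on restart
appendonly yes
appendfsync everysec   # fsync once per second (durability/speed trade-off)
```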
Does Redis load all its data from disk into RAM on every SELECT or UPDATE,
or only the matching results?
Is Redis a good NoSQL choice in this case, or should I use MongoDB instead?
Thanks