Redis error: CROSSSLOT Keys in request don't hash to the same slot #4180
-
I am clueless about Redis. We have a new OM instance running on AWS with ElastiCache, which was set up by a freelancer. Does anyone have any idea about the following exception?
local.xml: <!-- @link https://github.com/colinmollenhour/Cm_RedisSession -->
<session_save>db</session_save>
<redis_session>
    <host>tls://xxx.serverless.apse1.cache.amazonaws.com</host> <!-- Specify an absolute path if using a unix socket -->
    <port>6379</port>
    <password></password> <!-- Specify if your Redis server requires authentication -->
    <timeout>2.5</timeout> <!-- This is the Redis connection timeout, not the locking timeout -->
    <persistent></persistent> <!-- Specify unique string to enable persistent connections. E.g.: sess-db0; bugs with phpredis and php-fpm are known: https://github.com/nicolasff/phpredis/issues/70 -->
    <db>0</db> <!-- Redis database number; protection from accidental loss is improved by using a unique DB number for sessions -->
    <compression_threshold>2048</compression_threshold> <!-- Known bug with strings over 64k: https://github.com/colinmollenhour/Cm_Cache_Backend_Redis/issues/18 -->
    <compression_lib>gzip</compression_lib> <!-- gzip, lzf, lz4, snappy or none to disable compression -->
    <log_level>1</log_level> <!-- 0 (emergency: system is unusable), 4 (warning: additional information, recommended), 5 (notice: normal but significant condition), 6 (info: informational messages), 7 (debug: the most information for development/testing) -->
    <max_concurrency>6</max_concurrency> <!-- maximum number of processes that can wait for a lock on one session; for large production clusters, set this to at least 10% of the number of PHP processes -->
    <break_after_frontend>5</break_after_frontend> <!-- seconds to wait for a session lock in the frontend; not as critical as admin -->
    <fail_after>10</fail_after> <!-- seconds after which we bail from attempting to obtain lock (in addition to break after time) -->
    <break_after_adminhtml>30</break_after_adminhtml>
    <first_lifetime>600</first_lifetime> <!-- Lifetime of session for non-bots on the first write. 0 to disable -->
    <bot_first_lifetime>60</bot_first_lifetime> <!-- Lifetime of session for bots on the first write. 0 to disable -->
    <bot_lifetime>7200</bot_lifetime> <!-- Lifetime of session for bots on subsequent writes. 0 to disable -->
    <disable_locking>0</disable_locking> <!-- Disable session locking entirely. -->
    <min_lifetime>60</min_lifetime> <!-- Set the minimum session lifetime -->
    <max_lifetime>2592000</max_lifetime> <!-- Set the maximum session lifetime -->
    <log_exceptions>0</log_exceptions> <!-- Log connection failure and concurrent connection exceptions in exception.log. -->
</redis_session>
<!-- @see https://github.com/colinmollenhour/Cm_Cache_Backend_Redis -->
<cache>
    <backend>Cm_Cache_Backend_Redis</backend>
    <backend_options>
        <server>tls://xxx.serverless.apse1.cache.amazonaws.com</server> <!-- or absolute path to unix socket -->
        <port>6379</port>
        <persistent></persistent> <!-- Specify unique string to enable persistent connections. E.g.: sess-db0; bugs with phpredis and php-fpm are known: https://github.com/nicolasff/phpredis/issues/70 -->
        <database>0</database> <!-- Redis database number; protection against accidental data loss is improved by not sharing databases -->
        <password></password> <!-- Specify if your Redis server requires authentication -->
        <force_standalone>0</force_standalone> <!-- 0 for phpredis, 1 for standalone PHP -->
        <connect_retries>1</connect_retries> <!-- Reduces errors due to random connection failures; a value of 1 will not retry after the first failure -->
        <read_timeout>10</read_timeout> <!-- Set read timeout duration; phpredis does not currently support setting read timeouts -->
        <automatic_cleaning_factor>0</automatic_cleaning_factor> <!-- Disabled by default -->
        <compress_data>1</compress_data> <!-- 0-9 for compression level, recommended: 0 or 1 -->
        <compress_tags>1</compress_tags> <!-- 0-9 for compression level, recommended: 0 or 1 -->
        <compress_threshold>20480</compress_threshold> <!-- Strings below this size will not be compressed -->
        <compression_lib>gzip</compression_lib> <!-- Supports gzip, lzf, lz4 (as l4z), snappy and zstd -->
        <use_lua>0</use_lua> <!-- Set to 1 if Lua scripts should be used for some operations (recommended) -->
    </backend_options>
</cache>
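For background on the error itself (not part of the original question): CROSSSLOT is raised when Redis runs in cluster mode and a multi-key operation touches keys that map to different hash slots. Redis Cluster assigns each key to one of 16384 slots via CRC16 of the key (or of the "hash tag", the substring between the first `{` and `}` when one is present), and a multi-key command is only allowed when all its keys share a slot. A minimal sketch of that slot calculation, for illustration only (a real client such as phpredis does this internally):

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC-16/XMODEM (poly 0x1021, init 0), the variant Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    """Slot for a key: if the key contains a non-empty '{tag}', only the
    tag is hashed, so keys sharing a tag always land in the same slot."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:  # the tag must be non-empty
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

# Unrelated key names generally map to different slots, so a multi-key
# command on them fails with CROSSSLOT on a cluster-mode endpoint:
print(hash_slot("sess_abc123"), hash_slot("zc:k:foo"))
# A shared hash tag forces both keys into one slot:
print(hash_slot("{app}:sess_abc123") == hash_slot("{app}:zc:k:foo"))  # True
```

Cm_RedisSession and Cm_Cache_Backend_Redis issue multi-key operations (locking, tag cleanup) over keys with no common hash tag, which is why they hit this on a cluster-mode endpoint; as far as I know, ElastiCache Serverless always operates in cluster mode.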
Answered by kiatng on Sep 3, 2024
Replies: 3 comments, 1 reply
-
Cache and sessions shouldn't be in the same db.
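For illustration (a sketch, not part of the original reply): on a single non-cluster Redis instance, that separation uses the db/database settings already present in the config above, e.g.:

```xml
<redis_session>
    <db>0</db> <!-- sessions in database 0 -->
</redis_session>
<cache>
    <backend_options>
        <database>1</database> <!-- cache in database 1 -->
    </backend_options>
</cache>
```

Note that this only works outside cluster mode: Redis Cluster exposes only database 0 and rejects SELECT, so on a cluster-mode endpoint separation means separate endpoints or instances rather than separate database numbers.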
-
I changed
-
There are 3 possible options with ElastiCache in cluster mode:
Answer selected by kiatng