health HEALTH_WARN 64 pgs incomplete; 64 pgs stuck inactive; 64 pgs stuck unclean #187
That sounds like there aren't any OSD processes running and connected to the cluster. Check the output of `ceph osd tree` to confirm.
Hi, `ceph osd tree` is showing this output:

id  weight  type name      up/down  reweight
-1  0.09    root default

and the logs are showing:

…

Please suggest what to check. Thanks,
Ah yes, you'll need at least 3 OSDs for Ceph to be happy and healthy. Depending on how your CRUSH map is configured (I forget the defaults), these OSDs may have to be on separate hosts.
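A quick way to confirm whether any OSDs are actually up and in is the standard Ceph CLI (a sketch; exact output wording varies by release):

```shell
# Overall cluster health, including the osdmap summary
# (a healthy 3-OSD cluster reports something like "3 osds: 3 up, 3 in").
ceph -s

# CRUSH hierarchy: every OSD should appear under a host with state "up".
# A tree that shows only "root default" means no OSDs ever registered.
ceph osd tree

# One-line OSD count.
ceph osd stat
```

If the tree is empty or OSDs show "down", the placement groups have nowhere to go, which is exactly what "pgs stuck inactive/unclean" means.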
Hi, I am a bit confused by the statement "you'll need at least 3 OSDs to be happy and healthy". I followed the instructions here: http://docs.ceph.com/docs/hammer/start/quick-ceph-deploy/ and once I get to the command `ceph health`, the response is: "health HEALTH_ERR 64 pgs incomplete; 64 pgs stuck inactive; 64 pgs stuck unclean". That happens right after the install. The Ceph documentation clearly stated: … I have attempted this install at least 3 times now and the response is the same every time.

My setup is 1 admin node, 1 monitor, and 2 OSDs on 4 VirtualBox Ubuntu 14.04 LTS VMs within Ubuntu 16 (the previous attempt was within Ubuntu 14). Some observations:

- The debug information is not very helpful at all.
- Ceph is not writing to /var/log/ceph/ at all, even after I set permissions.
- `ceph-deploy osd activate` tells me that the OSDs are active, but `ceph osd tree` shows otherwise (down).
- The config is always read from /etc/ceph/ceph.conf (even though I install everything from the my-cluster directory), which is incorrect: when I ran the install, the config was created in /home/user/my-cluster/ceph.conf, yet it is read from /etc/ceph/ceph.conf.

So I will attempt 3 OSDs now, even though the site states otherwise. Any suggestions would be very helpful. Thanks, zd
Hi, I have the same problem as yours, and I have reinstalled Ceph more than 3 times. I'm really upset. Have you figured it out? Looking forward to your suggestions.
Hi, if you are using the ext4 file system, you need to place this in the [global] section of your config:

filestore xattr use omap = true

Restart and see if HEALTH_OK is achieved. Cheers
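The suggestion above, as a ceph.conf fragment (a sketch for filestore OSDs on ext4; restart the daemons afterwards for it to take effect):

```ini
[global]
# ext4's extended attributes are too small for filestore's metadata,
# so store xattrs in the omap (leveldb) instead.
filestore xattr use omap = true
```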
Hi, first, thank you so much for your suggestion!

My file system is ext4, and I did what you suggested, but it seemed to make no difference. I then reviewed the OSD's log thoroughly and found the following:

osd.0 0 backend (filestore) is unable to support max object name[space] len
osd.0 0 osd max object name len = 2048
osd.0 0 osd max object namespace len = 256
osd.0 0 (36) File name too long
journal close /var/lib/ceph/osd/ceph-0/journal
** ERROR: osd init failed: (36) File name too long

Then I found this page: http://docs.ceph.com/docs/jewel/rados/configuration/filesystem-recommendations/

I reinstalled Ceph again and placed the following in the [global] section of the config:

osd_max_object_name_len = 256
osd_max_object_namespace_len = 64

It works!!! I'm so happy and I appreciate your reply very much!!! Thanks again! Best wishes~

Hi, you are welcome. I am glad you solved it. Best wishes, Zayne
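For reference, the fix described in this exchange as a ceph.conf fragment (values taken from the Jewel filesystem-recommendations page quoted above; it is assumed the OSDs need to be restarted or redeployed for the limits to apply):

```ini
[global]
# ext4 limits file names to ~256 bytes, well below Ceph's defaults
# (osd max object name len = 2048, namespace len = 256), which causes
# "(36) File name too long" at OSD init. Cap both limits instead.
osd_max_object_name_len = 256
osd_max_object_namespace_len = 64
```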
If you are using the ext4 file system, place this in the [global] section of /etc/ceph/ceph.conf: osd_max_object_name_len = 256
I'm having the same problem, however I am using the preferred XFS filesystem. Any suggestions? [From the monitor node I get the following] … [From the OSD node] … [From the monitor node, out of /var/log/ceph/ceph.log] …
…f_redirected e754) currently waiting for peered
After adding the following lines to /etc/ceph/ceph.conf and rebooting the system, the issue somehow still exists:

osd_max_object_name_len = 256

`ceph status` …
I ran into those ext4 file system issues before. I tried the settings below in ceph.conf but finally gave up.

However, I followed this helpful document to deploy Ceph Jewel 10.2.9 on Ubuntu 16.04: log in to all OSD nodes and format the /dev/sdb partition with the XFS file system. After that, I followed the official document to deploy Ceph on my Ubuntu 16.04 servers. Everything works fine now.
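The reformat step described above might look like this on each OSD node (a sketch; /dev/sdb is the device named in the comment, so double-check it is really your spare data disk before running anything):

```shell
# WARNING: this destroys all data on /dev/sdb.
# Unmount any stale mount of the old filesystem first.
sudo umount /dev/sdb1 2>/dev/null || true

# Format the OSD data disk with XFS, the filesystem recommended
# for filestore in the Jewel documentation.
sudo mkfs.xfs -f /dev/sdb

# Confirm nothing from the old layout is still mounted.
mount | grep sdb || true
```

After the disks are XFS, the ext4-specific name-length workarounds above are no longer needed.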
I have exactly the same problem with 14.04 LTS and ext4. I tried almost everything, including all the suggestions above, but `ceph -s` still gives:

health HEALTH_ERR …

and `ceph osd tree` gives:

ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
…
After appending those lines to the admin node's ceph.conf, I think you should then run …
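The command elided above is presumably a config push followed by an OSD restart; a hedged sketch (the hostnames node1, node2, node3 are placeholders, and the systemd unit name assumes a systemd-based install such as Jewel on Ubuntu 16.04):

```shell
# From the admin node: overwrite the stale ceph.conf on every cluster node
# with the updated one from the working directory.
ceph-deploy --overwrite-conf config push node1 node2 node3

# On each OSD node: restart the OSD daemons so they pick up the new settings.
sudo systemctl restart ceph-osd.target
```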
Does this look like your error?
https://tracker.ceph.com/issues/17722
…On Mon, Sep 7, 2020 at 6:21 PM alamintech ***@***.***> wrote:
Please help me anyone.
[image: image]
<https://user-images.githubusercontent.com/68062764/92364757-5a4fae00-f115-11ea-90ee-a61246a87297.png>
After a server reboot I can't start the OSD service. Please help me, anyone.
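When an OSD won't come back after a reboot, the usual first checks are the systemd unit state, the daemon's journal, and whether the data partition mounted (a sketch; the osd id 0 and its data path are placeholders, and systemd unit names assume a systemd-based install):

```shell
# Is the unit failed, and with what error?
systemctl status ceph-osd@0

# Last 50 log lines from the daemon itself.
journalctl -u ceph-osd@0 --no-pager -n 50

# Make sure the unit starts now and on every boot.
sudo systemctl enable --now ceph-osd@0

# A common cause: the OSD data partition did not mount at boot.
df -h /var/lib/ceph/osd/ceph-0
```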
Hi,
I am having an issue of ceph health -
health HEALTH_WARN 64 pgs incomplete; 64 pgs stuck inactive; 64 pgs stuck unclean
Please suggest what I should check.
Thanks,
Abhishek