The Elasticsearch container does not start. Here is what I do:
```
git clone https://github.com/wolfbolin/crack-elasticsearch-by-docker.git
cd crack-elasticsearch-by-docker
version=7.17.2
chmod +x crack_and_install.sh
./crack_and_install.sh
```
Below is the full output:
```
7.17.2: Pulling from library/elasticsearch
4d32b49e2995: Pull complete
5e2cc520c590: Pull complete
5c035969826b: Pull complete
62b4de0976fe: Pull complete
e94bfd37447f: Pull complete
cbc23596b5ff: Pull complete
2dee55702edd: Pull complete
dcc2b2e29f24: Pull complete
a79124ff2153: Pull complete
Digest: sha256:f51e653b5dfca16afef88d870b697e087e4b562c63c3272d6e8c3c92657110d9
Status: Downloaded newer image for elasticsearch:7.17.2
docker.io/library/elasticsearch:7.17.2
7.17.2: Pulling from library/kibana
4d32b49e2995: Already exists
5bfa92e782fa: Pull complete
e6e475533f4b: Pull complete
cbce5af872cf: Pull complete
436799be3926: Pull complete
770503d8e3ad: Pull complete
870b1639d81b: Downloading [> ] 3.734MB/287.3MB
870b1639d81b: Pull complete
bda94123f641: Pull complete
3b44c0933f43: Pull complete
9f984d55c676: Pull complete
9192321c0211: Pull complete
0c8dc20e0401: Pull complete
2de49a43c45f: Pull complete
Digest: sha256:214302162d75a7c8ade156b3298f3e12ba275bc537503109f13a8caac33fbef0
Status: Downloaded newer image for kibana:7.17.2
docker.io/library/kibana:7.17.2
Run for version: 7.17.2
Error response from daemon: No such container: elastic-crack
Error response from daemon: No such container: elastic-crack
[+] Building 25.2s (13/13) FINISHED                                docker:default
 => [internal] load build definition from Dockerfile                         0.0s
 => => transferring dockerfile: 502B                                         0.0s
 => [internal] load metadata for docker.io/library/openjdk:17-jdk-buster     1.6s
 => [internal] load metadata for docker.io/library/elasticsearch:7.17.2      0.0s
 => [internal] load .dockerignore                                            0.0s
 => => transferring context: 2B                                              0.0s
 => [stage-1 1/6] FROM docker.io/library/openjdk:17-jdk-buster@sha256:fcc1f2be2f2361da9a9f65754864218fe5fed86ea085ca39779114c91edfd41a  17.9s
 => => resolve docker.io/library/openjdk:17-jdk-buster@sha256:fcc1f2be2f2361da9a9f65754864218fe5fed86ea085ca39779114c91edfd41a  0.0s
 => => sha256:fcc1f2be2f2361da9a9f65754864218fe5fed86ea085ca39779114c91edfd41a 549B / 549B  0.0s
 => => sha256:85bed84afb9a834cf090b55d2e584abd55b4792d93b750db896f486680638344 50.44MB / 50.44MB  3.0s
 => => sha256:5fdd409f4b2bf3771fac1cbc2df46d1cbb4b800b154cfc33e8e0cd23e67ab3e0 7.86MB / 7.86MB  1.7s
 => => sha256:e462f39f589395ce928db22f3c8ea2ca83ade3f78a0227133b5ffebfa0289ab0 1.59kB / 1.59kB  0.0s
 => => sha256:629cbc2df1641d7818780396ecb90bbe05033f53847f8dda69fa4575f6255211 5.50kB / 5.50kB  0.0s
 => => sha256:fa3069e6cecf9d5e43ace0a925b53849b2fd189ef41883d45d8f2e12213c2c06 10.00MB / 10.00MB  6.3s
 => => sha256:4ee16f45eff97cbe98b74c63ca2375b2c6b2592cd765a86df098f731baab13ad 51.84MB / 51.84MB  6.3s
 => => extracting sha256:85bed84afb9a834cf090b55d2e584abd55b4792d93b750db896f486680638344  3.1s
 => => sha256:d8a4f10afb0a6941e51896151ab4251cf04b9c754b9aec772d13a9bbaa34f9eb 13.92MB / 13.92MB  4.6s
 => => sha256:7106c9fc6ffa84170f3131c4bbcb8cdf306f4b834699dd302c2260468e3e4503 187.63MB / 187.63MB  14.7s
 => => extracting sha256:5fdd409f4b2bf3771fac1cbc2df46d1cbb4b800b154cfc33e8e0cd23e67ab3e0  0.4s
 => => extracting sha256:fa3069e6cecf9d5e43ace0a925b53849b2fd189ef41883d45d8f2e12213c2c06  0.2s
 => => extracting sha256:4ee16f45eff97cbe98b74c63ca2375b2c6b2592cd765a86df098f731baab13ad  3.0s
 => => extracting sha256:d8a4f10afb0a6941e51896151ab4251cf04b9c754b9aec772d13a9bbaa34f9eb  0.6s
 => => extracting sha256:7106c9fc6ffa84170f3131c4bbcb8cdf306f4b834699dd302c2260468e3e4503  3.2s
 => [baseline 1/1] FROM docker.io/library/elasticsearch:7.17.2               0.1s
 => [internal] load build context                                            0.0s
 => => transferring context: 2.19kB                                          0.0s
 => [stage-1 2/6] WORKDIR /crack                                             0.4s
 => [stage-1 3/6] COPY --from=baseline /usr/share/elasticsearch/lib /usr/share/elasticsearch/lib  0.2s
 => [stage-1 4/6] COPY --from=baseline /usr/share/elasticsearch/modules/x-pack-core /usr/share/elasticsearch/modules/x-pack-core  0.0s
 => [stage-1 5/6] COPY build_crack_jar.sh /crack                             0.0s
 => [stage-1 6/6] RUN apt update && apt install -y zip                       4.7s
 => exporting to image                                                       0.4s
 => => exporting layers                                                      0.3s
 => => writing image sha256:5408ad7c4e51affd35e90e8dd9b2111432d0a5a1d45228c90b3527e5dc623639  0.0s
 => => naming to docker.io/library/elastic-crack:7.17.2                      0.0s
Runtime environment
branch: 7.17
version: 7.17.2
http_proxy:
https_proxy:
License.java:555: error: cannot find symbol
        builder.field(Fields.STATUS, LicenseService.status(this).label());
                                                   ^
  symbol:   method status(License)
  location: class LicenseService
1 error
cp: cannot stat 'License.class': No such file or directory
Crack finish. Start elastic once and wait running(60s)
vm.max_map_count = 262144
f28bd7ff8c41d3786945f07b481d370f143d766d5303162dbb148663189b28f7
5d47e35e1c3cac5f1f7d8bb165e9dcf84ff761e4b1f0a375c9bb1baf6deb1f27
Successfully copied 20kB to /storage/test/crack-elasticsearch-by-docker/data/
Successfully copied 44.5kB to /storage/test/crack-elasticsearch-by-docker/config/
Copy files success.
Creating elasticsearch
elastic0
elastic0
39d7e504fbc299cb4475a59f5800215a1880b1c33d949149a0308b52e37e54ab
Create elasticsearch done
Creating kibana
17cf03dbf2c2f58922bfb202633e48d50e761f8d25bc21c79d6ccdbf5af431a0
Create kibana done
Wait for create enrollment token(30s)
{"type": "server", "timestamp": "2025-01-16T11:49:46,887Z", "level": "INFO", "component": "o.e.e.NodeEnvironment", "cluster.name": "docker-cluster", "node.name": "5d47e35e1c3c", "message": "heap size [2gb], compressed ordinary object pointers [true]" }
{"type": "server", "timestamp": "2025-01-16T11:49:46,912Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "5d47e35e1c3c", "message": "node name [5d47e35e1c3c], node ID [V-mec6brTayMC2OLsYOlbg], cluster name [docker-cluster], roles [transform, data_frozen, master, remote_cluster_client, data, ml, data_content, data_hot, data_warm, data_cold, ingest]" }
{"type": "server", "timestamp": "2025-01-16T11:49:53,003Z", "level": "INFO", "component": "o.e.x.m.p.l.CppLogMessageHandler", "cluster.name": "docker-cluster", "node.name": "5d47e35e1c3c", "message": "[controller/219] [Main.cc@122] controller (64 bit): Version 7.17.2 (Build 3f454aebd9a30c) Copyright (c) 2022 Elasticsearch BV" }
{"type": "server", "timestamp": "2025-01-16T11:49:53,586Z", "level": "INFO", "component": "o.e.x.s.a.s.FileRolesStore", "cluster.name": "docker-cluster", "node.name": "5d47e35e1c3c", "message": "parsed [0] roles from file [/usr/share/elasticsearch/config/roles.yml]" }
{"type": "server", "timestamp": "2025-01-16T11:49:54,681Z", "level": "INFO", "component": "o.e.i.g.ConfigDatabases", "cluster.name": "docker-cluster", "node.name": "5d47e35e1c3c", "message": "initialized default databases [[GeoLite2-Country.mmdb, GeoLite2-City.mmdb, GeoLite2-ASN.mmdb]], config databases [[]] and watching [/usr/share/elasticsearch/config/ingest-geoip] for changes" }
{"type": "server", "timestamp": "2025-01-16T11:49:54,682Z", "level": "INFO", "component": "o.e.i.g.DatabaseNodeService", "cluster.name": "docker-cluster", "node.name": "5d47e35e1c3c", "message": "initialized database registry, using geoip-databases directory [/tmp/elasticsearch-7705129155367704431/geoip-databases/V-mec6brTayMC2OLsYOlbg]" }
{"type": "server", "timestamp": "2025-01-16T11:49:55,470Z", "level": "INFO", "component": "o.e.t.NettyAllocator", "cluster.name": "docker-cluster", "node.name": "5d47e35e1c3c", "message": "creating NettyAllocator with the following configs: [name=elasticsearch_configured, chunk_size=1mb, suggested_max_allocation_size=1mb, factors={es.unsafe.use_netty_default_chunk_and_page_size=false, g1gc_enabled=true, g1gc_region_size=4mb}]" }
{"type": "server", "timestamp": "2025-01-16T11:49:55,505Z", "level": "INFO", "component": "o.e.i.r.RecoverySettings", "cluster.name": "docker-cluster", "node.name": "5d47e35e1c3c", "message": "using rate limit [40mb] with [default=40mb, read=0b, write=0b, max=0b]" }
{"type": "server", "timestamp": "2025-01-16T11:49:55,558Z", "level": "INFO", "component": "o.e.d.DiscoveryModule", "cluster.name": "docker-cluster", "node.name": "5d47e35e1c3c", "message": "using discovery type [zen] and seed hosts providers [settings]" }
{"type": "server", "timestamp": "2025-01-16T11:49:56,086Z", "level": "INFO", "component": "o.e.g.DanglingIndicesState", "cluster.name": "docker-cluster", "node.name": "5d47e35e1c3c", "message": "gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually" }
{"type": "server", "timestamp": "2025-01-16T11:49:56,749Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "5d47e35e1c3c", "message": "initialized" }
{"type": "server", "timestamp": "2025-01-16T11:49:56,749Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "5d47e35e1c3c", "message": "starting ..." }
{"type": "server", "timestamp": "2025-01-16T11:49:56,758Z", "level": "INFO", "component": "o.e.x.s.c.f.PersistentCache", "cluster.name": "docker-cluster", "node.name": "5d47e35e1c3c", "message": "persistent cache index loaded" }
{"type": "server", "timestamp": "2025-01-16T11:49:56,759Z", "level": "INFO", "component": "o.e.x.d.l.DeprecationIndexingComponent", "cluster.name": "docker-cluster", "node.name": "5d47e35e1c3c", "message": "deprecation component started" }
{"type": "server", "timestamp": "2025-01-16T11:49:56,955Z", "level": "INFO", "component": "o.e.t.TransportService", "cluster.name": "docker-cluster", "node.name": "5d47e35e1c3c", "message": "publish_address {172.30.0.2:9300}, bound_addresses {[::]:9300}" }
{"type": "server", "timestamp": "2025-01-16T11:49:56,965Z", "level": "INFO", "component": "o.e.x.m.Monitoring", "cluster.name": "docker-cluster", "node.name": "5d47e35e1c3c", "message": "creating template [.monitoring-alerts-7] with version [7]" }
{"type": "server", "timestamp": "2025-01-16T11:49:56,971Z", "level": "INFO", "component": "o.e.x.m.Monitoring", "cluster.name": "docker-cluster", "node.name": "5d47e35e1c3c", "message": "creating template [.monitoring-es] with version [7]" }
{"type": "server", "timestamp": "2025-01-16T11:49:56,972Z", "level": "INFO", "component": "o.e.x.m.Monitoring", "cluster.name": "docker-cluster", "node.name": "5d47e35e1c3c", "message": "creating template [.monitoring-kibana] with version [7]" }
{"type": "server", "timestamp": "2025-01-16T11:49:56,974Z", "level": "INFO", "component": "o.e.x.m.Monitoring", "cluster.name": "docker-cluster", "node.name": "5d47e35e1c3c", "message": "creating template [.monitoring-logstash] with version [7]" }
{"type": "server", "timestamp": "2025-01-16T11:49:56,978Z", "level": "INFO", "component": "o.e.x.m.Monitoring", "cluster.name": "docker-cluster", "node.name": "5d47e35e1c3c", "message": "creating template [.monitoring-beats] with version [7]" }
{"type": "server", "timestamp": "2025-01-16T11:49:57,096Z", "level": "INFO", "component": "o.e.b.BootstrapChecks", "cluster.name": "docker-cluster", "node.name": "5d47e35e1c3c", "message": "bound or publishing to a non-loopback address, enforcing bootstrap checks" }
ERROR: [1] bootstrap checks failed. You must address the points described in the following [1] lines before starting Elasticsearch.
bootstrap check failure [1] of [1]: the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured
ERROR: Elasticsearch did not exit normally - check the logs at /usr/share/elasticsearch/logs/docker-cluster.log
{"type": "server", "timestamp": "2025-01-16T11:49:57,110Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "5d47e35e1c3c", "message": "stopping ..." }
{"type": "server", "timestamp": "2025-01-16T11:49:57,126Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "5d47e35e1c3c", "message": "stopped" }
{"type": "server", "timestamp": "2025-01-16T11:49:57,127Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "5d47e35e1c3c", "message": "closing ..." }
{"type": "server", "timestamp": "2025-01-16T11:49:57,137Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "5d47e35e1c3c", "message": "closed" }
{"type": "server", "timestamp": "2025-01-16T11:49:57,139Z", "level": "INFO", "component": "o.e.x.m.p.NativeController", "cluster.name": "docker-cluster", "node.name": "5d47e35e1c3c", "message": "Native controller process has stopped - no new native processes can be started" }
OCI runtime exec failed: exec failed: unable to start container process: exec: "bin/kibana-verification-code": stat bin/kibana-verification-code: no such file or directory: unknown
All done
```
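For reference, the bootstrap check above only names the settings it wants. The sketch below shows one common way to supply them when running the stock elasticsearch:7.17.2 image as a single node; the container name and port mappings are illustrative and are not taken from the repo's crack_and_install.sh:

```
# Minimal sketch (not the repo's script): satisfy the discovery bootstrap check
# for a single-node run of the stock image. Container name "es-test" is made up.
docker run -d --name es-test \
  -p 9200:9200 -p 9300:9300 \
  -e "discovery.type=single-node" \
  elasticsearch:7.17.2

# Alternative for a one-node "cluster": name the node and declare it as the
# initial master instead of using single-node discovery:
#   -e "node.name=es01" -e "cluster.initial_master_nodes=es01"
```

Even with the discovery check satisfied, the earlier `License.java:555: error: cannot find symbol` and the missing `License.class` suggest the patched jar was never built, so that may be a separate problem.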