License issue #10

Open
vkjuju opened this issue Oct 2, 2017 · 62 comments


vkjuju commented Oct 2, 2017

Hi, when we ran gem build logstash-output-cassandra.gemspec, there was a license issue as follows; any advice would be appreciated.

root@199mysqlmove:/home/mysqlmove/download/logstash-output-cassandra-master# gem build logstash-output-cassandra.gemspec
fatal: Not a git repository (or any of the parent directories): .git
WARNING: license value 'Apache License (2.0)' is invalid. Use a license identifier from
http://spdx.org/licenses or 'Nonstandard' for a nonstandard license.
WARNING: open-ended dependency on cassandra-driver (>= 0) is not recommended
if cassandra-driver is semantically versioned, use:
add_runtime_dependency 'cassandra-driver', '~> 0'
WARNING: open-ended dependency on logstash-devutils (>= 0, development) is not recommended
if logstash-devutils is semantically versioned, use:
add_development_dependency 'logstash-devutils', '~> 0'
WARNING: See http://guides.rubygems.org/specification-reference/ for help
Successfully built RubyGem
Name: logstash-output-cassandra
Version: 0.1.1
File: logstash-output-cassandra-0.1.1.gem


vkjuju commented Oct 3, 2017

It finally worked,
but when we ran it with bin/logstash -e 'output {cassandra {}}', there was another error which we don't know how to fix:
root@199mysqlmove:/opt/logstash/bin# ./logstash -e 'output {cassandra {}}'
The error reported is:
Couldn't find any output plugin named 'cassandra'. Are you sure this is correct? Trying to load the cassandra output plugin resulted in this error: no such file to load -- logstash/outputs/cassandra

A similar case with a solution is described here:
https://discuss.elastic.co/t/custom-plugin-installed-fails-to-start-logstash/38505

@valentin-fischer

Hi.

This is because the output is not registered within logstash.
The idea is that you have to install the gem after you build it.

I'll have to search how I did it in the past when I was building this output...
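
The sequence being described looks roughly like this (a sketch; it assumes the plugin source is in the current directory and a Logstash 2.x install under /opt/logstash, as elsewhere in this thread):

    # build the gem from the plugin's gemspec
    gem build logstash-output-cassandra.gemspec
    # register the freshly built gem with Logstash's own plugin tool
    /opt/logstash/bin/plugin install ./logstash-output-cassandra-0.1.1.gem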


vkjuju commented Oct 3, 2017

@valentinul , we did install it successfully as follows:
root@199mysqlmove:/opt/logstash/bin# ./plugin install /home/mysqlmove/download/logstash-output-cassandra-master/logstash-output-cassandra-0.1.1.gem
Validating /home/mysqlmove/download/logstash-output-cassandra-master/logstash-output-cassandra-0.1.1.gem
Installing logstash-output-cassandra
Installation successful

How come the last step still fails? logstash -e 'output {cassandra {}}'

@valentin-fischer

Ok... that looks good. What Logstash version are you using?


vkjuju commented Oct 3, 2017

@valentinul , it's logstash_2.2.2-1_all.deb


valentin-fischer commented Oct 3, 2017

Ok... I remember having this kind of issue ....
I believe I ended up doing the install something like this...

If you're installing a local gem file, put the path to the file in GEM_PATH.

Edit /opt/logstash/Gemfile to include the line:

gem "logstash-output-cassandra", "0.1.1"

Install

env GEM_HOME=/opt/logstash/vendor/bundle/jruby/1.9 /opt/logstash/vendor/jruby/bin/jruby /opt/logstash/vendor/jruby/bin/gem install logstash-output-cassandra -v 0.1.1

Try it


vkjuju commented Oct 3, 2017

env GEM_HOME=/opt/logstash/vendor/bundle/jruby/1.9 /opt/logstash/vendor/jruby/bin/jruby /opt/logstash/vendor/jruby/bin/gem install logstash-output-cassandra -v 0.1.1
Fetching: logstash-core-1.5.6-java.gem (100%)
Successfully installed logstash-core-1.5.6-java
Fetching: ione-1.2.4.gem (100%)
Successfully installed ione-1.2.4
Fetching: cassandra-driver-3.2.0-java.gem (100%)
Successfully installed cassandra-driver-3.2.0-java
Successfully installed logstash-output-cassandra-0.1.1
4 gems installed

but the last step still failed:
root@199mysqlmove:/home/mysqlmove/download/logstash-output-cassandra-master# /opt/logstash/bin/logstash -e 'output {cassandra {}}'
The error reported is:
Couldn't find any output plugin named 'cassandra'. Are you sure this is correct? Trying to load the cassandra output plugin resulted in this error: no such file to load -- logstash/outputs/cassandra

@valentin-fischer

Hmmm,

It seems that the file is missing. Is the cassandra.rb file in there?


vkjuju commented Oct 3, 2017

Yes, it's there:
root@199mysqlmove:/home/mysqlmove/download/logstash-output-cassandra-master# find . -name *.rb
./spec/outputs/cassandra_spec.rb
./lib/logstash/outputs/cassandra.rb

@valentin-fischer

Do a

env GEM_HOME=/opt/logstash/vendor/bundle/jruby/1.9 /opt/logstash/vendor/jruby/bin/jruby /opt/logstash/vendor/jruby/bin/gem list

Do you see the cassandra output in there?


vkjuju commented Oct 3, 2017

I think it's there:
root@199mysqlmove:/home/mysqlmove/download/logstash-output-cassandra-master# env GEM_HOME=/opt/logstash/vendor/bundle/jruby/1.9 /opt/logstash/vendor/jruby/bin/jruby /opt/logstash/vendor/jruby/bin/gem list

*** LOCAL GEMS ***

addressable (2.3.8)
arr-pm (0.0.10)
atomic (1.1.99 java)
avl_tree (1.2.1)
awesome_print (1.6.1)
aws-sdk (2.1.36)
aws-sdk-core (2.1.36)
aws-sdk-resources (2.1.36)
aws-sdk-v1 (1.66.0)
backports (3.6.8)
bindata (2.2.0)
buftok (0.2.0)
bundler (1.9.10)
cabin (0.7.2)
cassandra-driver (3.2.0 java)
childprocess (0.5.9)
cinch (2.3.1)
clamp (0.6.5)
coderay (1.1.0)
concurrent-ruby (0.9.2 java)
domain_name (0.5.20160128)
edn (1.1.0)
elasticsearch (1.0.15)
elasticsearch-api (1.0.15)
elasticsearch-transport (1.0.15)
equalizer (0.0.10)
faraday (0.9.2)
ffi (1.9.10 java)
ffi-rzmq (2.0.4)
ffi-rzmq-core (1.0.4)
file-dependencies (0.1.6)
filesize (0.0.4)
filewatch (0.8.0)
fpm (1.3.3)
gelf (1.3.2)
gelfd (0.2.0)
gems (0.8.3)
geoip (1.6.1)
gmetric (0.1.3)
hipchat (1.5.2)
hitimes (1.2.3 java)
http (0.9.8)
http-cookie (1.0.2)
http-form_data (1.0.1)
http_parser.rb (0.6.0 java)
httparty (0.13.7)
i18n (0.6.9)
ione (1.2.4)
jar-dependencies (0.3.2, 0.2.6)
jls-grok (0.11.2)
jls-lumberjack (0.0.26)
jmespath (1.1.3)
jrjackson (0.3.8)
jruby-kafka (1.5.0 java)
jruby-openssl (0.9.13 java, 0.9.11 java)
json (1.8.3 java, 1.8.0 java)
logstash-codec-collectd (2.0.2)
logstash-codec-dots (2.0.2)
logstash-codec-edn (2.0.2)
logstash-codec-edn_lines (2.0.2)
logstash-codec-es_bulk (2.0.2)
logstash-codec-fluent (2.0.2 java)
logstash-codec-graphite (2.0.2)
logstash-codec-json (2.1.0)
logstash-codec-json_lines (2.1.1)
logstash-codec-line (2.1.0)
logstash-codec-msgpack (2.0.2 java)
logstash-codec-multiline (2.0.9)
logstash-codec-netflow (2.0.3)
logstash-codec-oldlogstashjson (2.0.2)
logstash-codec-plain (2.0.2)
logstash-codec-rubydebug (2.0.5)
logstash-core (2.2.2 java, 1.5.6 java)
logstash-core-event (2.2.2 java)
logstash-filter-anonymize (2.0.2)
logstash-filter-checksum (2.0.2)
logstash-filter-clone (2.0.4)
logstash-filter-csv (2.1.1)
logstash-filter-date (2.1.2)
logstash-filter-dns (2.0.2)
logstash-filter-drop (2.0.2)
logstash-filter-fingerprint (2.0.3)
logstash-filter-geoip (2.0.5)
logstash-filter-grok (2.0.3)
logstash-filter-json (2.0.3)
logstash-filter-kv (2.0.4)
logstash-filter-metrics (3.0.0)
logstash-filter-multiline (2.0.3)
logstash-filter-mutate (2.0.3)
logstash-filter-ruby (2.0.3)
logstash-filter-sleep (2.0.2)
logstash-filter-split (2.0.2)
logstash-filter-syslog_pri (2.0.2)
logstash-filter-throttle (2.0.2)
logstash-filter-urldecode (2.0.2)
logstash-filter-useragent (2.0.4)
logstash-filter-uuid (2.0.3)
logstash-filter-xml (2.1.1)
logstash-input-beats (2.1.3)
logstash-input-couchdb_changes (2.0.2)
logstash-input-elasticsearch (2.0.3)
logstash-input-eventlog (3.0.1)
logstash-input-exec (2.0.4)
logstash-input-file (2.2.1)
logstash-input-ganglia (2.0.4)
logstash-input-gelf (2.0.2)
logstash-input-generator (2.0.2)
logstash-input-graphite (2.0.5)
logstash-input-heartbeat (2.0.2)
logstash-input-http (2.2.0)
logstash-input-http_poller (2.0.3)
logstash-input-imap (2.0.3)
logstash-input-irc (2.0.3)
logstash-input-jdbc (3.0.0)
logstash-input-kafka (2.0.4)
logstash-input-log4j (2.0.5 java)
logstash-input-lumberjack (2.0.5)
logstash-input-pipe (2.0.2)
logstash-input-rabbitmq (3.1.4)
logstash-input-redis (2.0.2)
logstash-input-s3 (2.0.4)
logstash-input-snmptrap (2.0.2)
logstash-input-sqs (2.0.3)
logstash-input-stdin (2.0.2)
logstash-input-syslog (2.0.2)
logstash-input-tcp (3.0.2)
logstash-input-twitter (2.2.0)
logstash-input-udp (2.0.3)
logstash-input-unix (2.0.4)
logstash-input-xmpp (2.0.3)
logstash-input-zeromq (2.0.2)
logstash-mixin-aws (2.0.2)
logstash-mixin-http_client (2.2.1)
logstash-mixin-rabbitmq_connection (2.3.0 java)
logstash-output-cassandra (0.1.1)
logstash-output-cloudwatch (2.0.2)
logstash-output-csv (2.0.3)
logstash-output-elasticsearch (2.5.1 java)
logstash-output-email (3.0.2)
logstash-output-exec (2.0.2)
logstash-output-file (2.2.3)
logstash-output-ganglia (2.0.2)
logstash-output-gelf (2.0.3)
logstash-output-graphite (2.0.3)
logstash-output-hipchat (3.0.2)
logstash-output-http (2.1.1)
logstash-output-irc (2.0.2)
logstash-output-juggernaut (2.0.2)
logstash-output-kafka (2.0.2)
logstash-output-lumberjack (2.0.4)
logstash-output-nagios (2.0.2)
logstash-output-nagios_nsca (2.0.3)
logstash-output-null (2.0.2)
logstash-output-opentsdb (2.0.2)
logstash-output-pagerduty (2.0.2)
logstash-output-pipe (2.0.2)
logstash-output-rabbitmq (3.0.7 java)
logstash-output-redis (2.0.2)
logstash-output-s3 (2.0.4)
logstash-output-sns (3.0.2)
logstash-output-sqs (2.0.2)
logstash-output-statsd (2.0.5)
logstash-output-stdout (2.0.4)
logstash-output-tcp (2.0.2)
logstash-output-udp (2.0.2)
logstash-output-xmpp (2.0.2)
logstash-output-zeromq (2.0.2)
logstash-patterns-core (2.0.2)
lru_redux (1.1.0)
mail (2.6.3)
manticore (0.5.2 java)
march_hare (2.15.0 java)
memoizable (0.4.2)
method_source (0.8.2)
metriks (0.9.9.7)
mime-types (2.99)
mimemagic (0.3.1)
minitar (0.5.4)
msgpack-jruby (1.4.1 java)
multi_json (1.11.2)
multi_xml (0.5.5)
multipart-post (2.0.0)
murmurhash3 (0.1.6 java)
naught (1.1.0)
nokogiri (1.6.7.2 java)
octokit (3.8.0)
polyglot (0.3.5)
pry (0.10.3 java)
puma (2.16.0 java)
rack (1.6.4)
rake (10.1.0)
rdoc (4.1.2)
redis (3.2.2)
ruby-maven (3.3.10)
ruby-maven-libs (3.3.3)
rubyzip (1.1.7)
rufus-scheduler (3.0.9)
sawyer (0.6.0)
sequel (4.31.0)
simple_oauth (0.3.1)
slop (3.6.0)
snmp (1.2.0)
spoon (0.0.4)
statsd-ruby (1.2.0)
stud (0.0.22)
thread_safe (0.3.5 java)
treetop (1.4.15)
twitter (5.15.0)
tzinfo (1.2.2)
tzinfo-data (1.2016.1)
unf (0.1.4 java)
user_agent_parser (2.3.0)
win32-eventlog (0.6.5)
xml-simple (1.1.5)
xmpp4r (0.5)

@valentin-fischer

Seems to be in there. I think it's a path issue....
Try to start logstash manually by specifying all the paths: do a normal init start, grab the full command line it uses, and then start Logstash yourself with that exact command line.

Add -vvvv and see if anything is fishy...


vkjuju commented Oct 3, 2017

Sorry, I don't get it. Could you provide the command line? Thanks. Or any chance of a TeamViewer session?

@valentin-fischer

Start logstash using /etc/init.d/logstash start and get/grep the full command line used to start it.

Use that full command to start it yourself, so you can see the actual PATH it is started with...

Can't help you with TV.
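
Roughly something like this (a sketch; the exact arguments printed depend on the packaged init script):

    /etc/init.d/logstash start
    # print the full java command line the service was started with,
    # so it can be copied and re-run by hand with --debug / -vvvv added
    ps aux | grep [l]ogstash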


vkjuju commented Oct 3, 2017

I don't quite understand what you told me; is the following ok?
root@199mysqlmove:/home/mysqlmove/download/logstash-output-cassandra-master# /etc/init.d/logstash start
logstash started.
root@199mysqlmove:/home/mysqlmove/download/logstash-output-cassandra-master# /opt/logstash/bin/logstash -e 'output {cassandra {}}'
The error reported is:
Couldn't find any output plugin named 'cassandra'. Are you sure this is correct? Trying to load the cassandra output plugin resulted in this error: no such file to load -- logstash/outputs/cassandra

@valentin-fischer

Run as root.

/usr/bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Djava.io.tmpdir=/var/lib/logstash -Xmx1g -Xss2048k -Djffi.boot.library.path=/opt/logstash/vendor/jruby/lib/jni -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Djava.io.tmpdir=/var/lib/logstash -XX:HeapDumpPath=/opt/logstash/heapdump.hprof -Xbootclasspath/a:/opt/logstash/vendor/jruby/lib/jruby.jar -classpath : -Djruby.home=/opt/logstash/vendor/jruby -Djruby.lib=/opt/logstash/vendor/jruby/lib -Djruby.script=jruby -Djruby.shell=/bin/sh org.jruby.Main --1.9 /opt/logstash/lib/bootstrap/environment.rb logstash/runner.rb agent -f /etc/logstash/conf.d -l /var/log/logstash/logstash.log --debug

@valentin-fischer

Also change /etc/logstash/conf.d to a different path if you have the config file in some other place.


vkjuju commented Oct 5, 2017

the error message:

Error: No config files found: /etc/logstash/conf.d/*
Can you make sure this path is a logstash config file?
You may be interested in the '--configtest' flag which you can
use to validate logstash's configuration before you choose

Could you tell me the config file name?


vkjuju commented Oct 5, 2017

@valentinul , there's no file beneath /etc/logstash/conf.d. Could you tell me the name of the config file?

@valentin-fischer

@vkjuju the file has to be called logstash.conf


vkjuju commented Oct 5, 2017

It's weird, I don't have a logstash.conf anywhere on my Ubuntu system?


valentin-fischer commented Oct 5, 2017 via email


vkjuju commented Oct 5, 2017

there's a file:
vi /var/lib/dpkg/info/logstash.conffiles

/etc/default/logstash
/etc/init.d/logstash
/etc/logrotate.d/logstash

Is it this one? There's no logstash.conf; I searched for it from /.


valentin-fischer commented Oct 5, 2017 via email


vkjuju commented Oct 5, 2017

I created an empty logstash.conf in /etc/logstash/conf.d and ran with debug:
Error: Expected one of #, input, filter, output at line 2, column 1 (byte 2) after
You may be interested in the '--configtest' flag which you can
use to validate logstash's configuration before you choose
to restart a running system.
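
For reference, that error just means the file is effectively empty: a pipeline needs at least one input, filter or output section. A minimal placeholder that reads stdin and prints events to stdout, only to confirm the pipeline starts, might look like this (a sketch, assuming the Debian package's default config directory):

    cat > /etc/logstash/conf.d/logstash.conf <<'EOF'
    input  { stdin { } }
    output { stdout { codec => rubydebug } }
    EOF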


valentin-fischer commented Oct 5, 2017 via email


vkjuju commented Oct 5, 2017

Sorry, I don't get the part about "put in it the input/filter/output section you need".


valentin-fischer commented Oct 5, 2017 via email


vkjuju commented Oct 5, 2017

Sorry, I'm stuck on logstash.conf. I created an empty one underneath /etc/logstash/conf.d and I don't know what to do next...


valentin-fischer commented Oct 5, 2017 via email


vkjuju commented Oct 5, 2017

@valentinul , any more advice would be appreciated ^^"

@valentin-fischer

Take a closer look at gem. The issue is that there are multiple gem/jruby installations on the system and Logstash has its own. So try to find which is the correct path to install the cassandra output into. You have to run that gem install against the proper jruby path.

So, in conclusion: find all the "gem" binary locations and run gem install logstash-output-cassandra -v 0.1.1 until it gets registered in the Logstash system/path.


vkjuju commented Oct 11, 2017

Sorry, just came back from national holidays:
root@199mysqlmove:/# find . -name gem
./home/mysqlmove/logstash-5.5.1/vendor/jruby/bin/gem
./opt/logstash/vendor/jruby/bin/gem
./usr/bin/gem

root@199mysqlmove:/opt/logstash/vendor/jruby/bin# ./gem build logstash-output-cassandra.gemspec
/usr/bin/env: jruby: no such file or directory

@valentinul , any advice would be appreciated...
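
The "/usr/bin/env: jruby: no such file or directory" message suggests the vendored gem script has an "env jruby" shebang, so it only works when Logstash's bundled JRuby is on the PATH. A sketch of a workaround, invoking the script through that JRuby directly (the same pattern as the install command earlier in this thread):

    cd /home/mysqlmove/download/logstash-output-cassandra-master
    /opt/logstash/vendor/jruby/bin/jruby /opt/logstash/vendor/jruby/bin/gem build logstash-output-cassandra.gemspec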


valentin-fischer commented Oct 11, 2017 via email


vkjuju commented Oct 11, 2017

@valentinul , sorry, I don't get it. Is it export JRUBY_PATH= ?

@valentin-fischer

env GEM_HOME=/opt/logstash/vendor/jruby
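
Note that env GEM_HOME=... on its own only prints the resulting environment; the assignment has to prefix an actual gem command, roughly like this (a sketch, reusing the paths from the gem list command earlier in this thread):

    env GEM_HOME=/opt/logstash/vendor/bundle/jruby/1.9 \
        /opt/logstash/vendor/jruby/bin/jruby /opt/logstash/vendor/jruby/bin/gem list | grep cassandra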


vkjuju commented Oct 11, 2017

the same error after executing env GEM_HOME=/opt/logstash/vendor/jruby:
env GEM_HOME=/opt/logstash/vendor/jruby
TOMCAT_HOME=/usr/local/apache-tomcat-9.0.0.M21
GEM_HOME=/opt/logstash/vendor/jruby
TERM=xterm
SHELL=/bin/bash
DERBY_HOME=/usr/lib/jvm/java-8-oracle/db
SSH_CLIENT=192.168.25.35 6041 22
OLDPWD=/
CATALINA_BASE=/usr/local/apache-tomcat-9.0.0.M21
SSH_TTY=/dev/pts/9
USER=root
LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:.tar=01;31:.tgz=01;31:.arj=01;31:.taz=01;31:.lzh=01;31:.lzma=01;31:.tlz=01;31:.txz=01;31:.zip=01;31:.z=01;31:.Z=01;31:.dz=01;31:.gz=01;31:.lz=01;31:.xz=01;31:.bz2=01;31:.bz=01;31:.tbz=01;31:.tbz2=01;31:.tz=01;31:.deb=01;31:.rpm=01;31:.jar=01;31:.war=01;31:.ear=01;31:.sar=01;31:.rar=01;31:.ace=01;31:.zoo=01;31:.cpio=01;31:.7z=01;31:.rz=01;31:.jpg=01;35:.jpeg=01;35:.gif=01;35:.bmp=01;35:.pbm=01;35:.pgm=01;35:.ppm=01;35:.tga=01;35:.xbm=01;35:.xpm=01;35:.tif=01;35:.tiff=01;35:.png=01;35:.svg=01;35:.svgz=01;35:.mng=01;35:.pcx=01;35:.mov=01;35:.mpg=01;35:.mpeg=01;35:.m2v=01;35:.mkv=01;35:.webm=01;35:.ogm=01;35:.mp4=01;35:.m4v=01;35:.mp4v=01;35:.vob=01;35:.qt=01;35:.nuv=01;35:.wmv=01;35:.asf=01;35:.rm=01;35:.rmvb=01;35:.flc=01;35:.avi=01;35:.fli=01;35:.flv=01;35:.gl=01;35:.dl=01;35:.xcf=01;35:.xwd=01;35:.yuv=01;35:.cgm=01;35:.emf=01;35:.axv=01;35:.anx=01;35:.ogv=01;35:.ogx=01;35:.aac=00;36:.au=00;36:.flac=00;36:.mid=00;36:.midi=00;36:.mka=00;36:.mp3=00;36:.mpc=00;36:.ogg=00;36:.ra=00;36:.wav=00;36:.axa=00;36:.oga=00;36:.spx=00;36:.xspf=00;36:
MAIL=/var/mail/root
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/usr/lib/jvm/java-8-oracle/bin:/usr/lib/jvm/java-8-oracle/db/bin:/usr/lib/jvm/java-8-oracle/jre/bin:/usr/local/solr-6.6.0/bin:/usr/local/apache-tomcat-9.0.0.M21/bin:/usr/local/solr-6.5.1/server/scripts/cloud-scripts
PWD=/opt/logstash/vendor/jruby/bin
JAVA_HOME=/usr/lib/jvm/java-8-oracle
LANG=zh_TW.UTF-8
SHLVL=1
HOME=/root
LANGUAGE=zh_TW:zh
LOGNAME=root
J2SDKDIR=/usr/lib/jvm/java-8-oracle
SSH_CONNECTION=192.168.25.35 6041 192.168.112.199 22
LESSOPEN=| /usr/bin/lesspipe %s
DISPLAY=localhost:18.0
J2REDIR=/usr/lib/jvm/java-8-oracle/jre
LESSCLOSE=/usr/bin/lesspipe %s %s
_=/usr/bin/env


vkjuju commented Oct 11, 2017

@valentinul , is it ok if you ssh to our Ubuntu server when you get a chance? We have been stuck on this issue for quite some time...


valentin-fischer commented Oct 11, 2017 via email


vkjuju commented Oct 11, 2017

Ok, I have sent some info to you. BTW, my Skype is joesonga at hotmail.com. Thanks


vkjuju commented Oct 12, 2017

@valentinul , there are still some errors, as follows:
root@199mysqlmove:/opt/logstash/bin# ./logstash -e 'output {cassandra {}}'
plugin is using the 'milestone' method to declare the version of the plugin this method is deprecated in favor of declaring the version inside the gemspec. {:level=>:warn}
Missing a required setting for the cassandra output plugin:

output {
cassandra {
hosts => # SETTING MISSING
...
}
} {:level=>:error}
Missing a required setting for the cassandra output plugin:

output {
cassandra {
keyspace => # SETTING MISSING
...
}
} {:level=>:error}
Missing a required setting for the cassandra output plugin:

output {
cassandra {
table => # SETTING MISSING
...
}
} {:level=>:error}
Missing a required setting for the cassandra output plugin:

output {
cassandra {
username => # SETTING MISSING
...
}
} {:level=>:error}
Missing a required setting for the cassandra output plugin:

output {
cassandra {
password => # SETTING MISSING
...
}
} {:level=>:error}
Error: Something is wrong with your configuration.
You may be interested in the '--configtest' flag which you can
use to validate logstash's configuration before you choose
to restart a running system.

@valentin-fischer

Yes. You have to fill in/use all the mandatory settings for the cassandra output.


vkjuju commented Oct 12, 2017

Sorry, I don't know how to fix the above error. Could you write down your solution once you get a chance?


vkjuju commented Oct 12, 2017 via email

@valentin-fischer

Nope, you have to dig the problem out yourself. You have to write the actual configuration for the cassandra output.

An example is the following.

output {
    cassandra {
        # List of Cassandra hostname(s) or IP-address(es)
        hosts => [ "cass-01", "cass-02" ]

        # The port cassandra is listening to
        port => 9042

        # The protocol version to use with cassandra
        protocol_version => 4

        # Cassandra consistency level.
        # Options: "any", "one", "two", "three", "quorum", "all", "local_quorum", "each_quorum", "serial", "local_serial", "local_one"
        # Default: "one"
        consistency => 'any'

        # The keyspace to use
        keyspace => "a_ks"

        # The table to use (event level processing (e.g. %{[key]}) is supported)
        table => "%{[@metadata][cassandra_table]}"

        # Username
        username => "cassandra"

        # Password
        password => "cassandra"

        # An optional hints hash which will be used in case filter_transform or filter_transform_event_key are not in use
        # It is used to trigger a forced type casting to the cassandra driver types in
        # the form of a hash from column name to type name in the following manner:
        hints => {
            id => "int"
            at => "timestamp"
            resellerId => "int"
            errno => "int"
            duration => "float"
            ip => "inet"
        }

        # The retry policy to use (the default is the default retry policy)
        # the hash requires the name of the policy and the params it requires
        # The available policy names are:
        # * default => retry once if needed / possible
        # * downgrading_consistency => retry once with a best guess lowered consistency
        # * failthrough => fail immediately (i.e. no retries)
        # * backoff => a version of the default retry policy but with configurable backoff retries
        # The backoff options are as follows:
        # * backoff_type => either * or ** for linear and exponential backoffs respectively
        # * backoff_size => the left operand for the backoff type in seconds
        # * retry_limit => the maximum amount of retries to allow per query
        # example:
        # using { "type" => "backoff" "backoff_type" => "**" "backoff_size" => 2 "retry_limit" => 10 } will perform 10 retries with the following wait times: 1, 2, 4, 8, 16, ... 1024
        # NOTE: there is an underlying assumption that the insert query is idempotent !!!
        # NOTE: when the backoff retry policy is used, it will also be used to handle pure client timeouts and not just ones coming from the coordinator
        retry_policy => { "type" => "default" }

        # The command execution timeout
        request_timeout => 1

        # Ignore bad values
        ignore_bad_values => false

        # In Logstashes >= 2.2 this setting defines the maximum sized bulk request Logstash will make
        # You may want to increase this to be in line with your pipeline's batch size.
        # If you specify a number larger than the batch size of your pipeline it will have no effect,
        # save for the case where a filter increases the size of an inflight batch by outputting
        # events.
        #
        # In Logstashes <= 2.1 this plugin uses its own internal buffer of events.
        # This config option sets that size. In these older logstashes this size may
        # have a significant impact on heap usage, whereas in 2.2+ it will never increase it.
        # To make efficient bulk API calls, we will buffer a certain number of
        # events before flushing that out to Cassandra. This setting
        # controls how many events will be buffered before sending a batch
        # of events. Increasing the `flush_size` has an effect on Logstash's heap size.
        # Remember to also increase the heap size using `LS_HEAP_SIZE` if you are sending big commands
        # or have increased the `flush_size` to a higher value.
        flush_size => 500

        # The amount of time since last flush before a flush is forced.
        #
        # This setting helps ensure slow event rates don't get stuck in Logstash.
        # For example, if your `flush_size` is 100, and you have received 10 events,
        # and it has been more than `idle_flush_time` seconds since the last flush,
        # Logstash will flush those 10 events automatically.
        #
        # This helps keep both fast and slow log streams moving along in
        # near-real-time.
        idle_flush_time => 1
    }
}


vkjuju commented Oct 12, 2017 via email

@valentin-fischer

It seems that you don't understand the actual issue. You don't have a valid logstash configuration yet.

When you run logstash -e, that's equivalent to running logstash with a configuration taken from the command line.

So, in other words, YOU have to write a valid logstash.conf
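
To illustrate the difference (a sketch; the stdin/stdout pipeline is only a placeholder): -e takes the whole pipeline definition as a command-line string, while -f reads the same thing from a file, and in both cases the configuration has to be complete and valid.

    /opt/logstash/bin/logstash -e 'input { stdin { } } output { stdout { } }'
    /opt/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf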


vkjuju commented Oct 12, 2017 via email


vkjuju commented Oct 16, 2017

@valentinul , we have been collecting IIS logs and sending the output to Elasticsearch with the config below; we'd like to know if we can just replace the output section with your example? Thanks
################################################################
input {
    file {
        type => "iis-w3c"
        path => "C:\inetpub\logs\LogFiles*"
    }
}

filter {

    if [message] =~ "^#" {
        drop {}
    }

    grok {
        match => ["message", "%{TIMESTAMP_ISO8601:log_timestamp} %{WORD:serviceName} %{WORD:serverName} %{IP:serverIP} %{WORD:method} %{URIPATH:uriStem} %{NOTSPACE:uriQuery} %{NUMBER:port} %{NOTSPACE:username} %{IPORHOST:clientIP} %{NOTSPACE:protocolVersion} %{NOTSPACE:userAgent} %{NOTSPACE:cookie} %{NOTSPACE:referer} %{NOTSPACE:requestHost} %{NUMBER:response} %{NUMBER:subresponse} %{NUMBER:win32response} %{NUMBER:bytesSent} %{NUMBER:bytesReceived} %{NUMBER:timetaken}"]
    }

    date {
        match => [ "log_timestamp", "YYYY-MM-dd HH:mm:ss" ]
        timezone => "Etc/UTC"
    }

    if [bytesSent] {
        ruby {
            code => "event['kilobytesSent'] = event['bytesSent'].to_i / 1024.0"
        }
    }

    if [bytesReceived] {
        ruby {
            code => "event['kilobytesReceived'] = event['bytesReceived'].to_i / 1024.0"
        }
    }

    mutate {
        convert => ["bytesSent", "integer"]
        convert => ["bytesReceived", "integer"]
        convert => ["timetaken", "integer"]

        add_field => { "clientHostname" => "%{clientIP}" }

        remove_field => [ "log_timestamp"]
    }

    dns {
        action => "replace"
        reverse => ["clientHostname"]
    }

    useragent {
        source => "useragent"
        prefix => "browser"
    }
}

output {
    elasticsearch {
        hosts => [ "192.168.112.199:9200" ]
        #protocol => "http"
        index => "%{type}-%{+YYYY.MM}"
    }
}

@valentin-fischer

Hi,

No... it's not that simple. You have to create the tables in Cassandra first and make sure they have the correct structure, matching your message structure, and so on...

You also have to clean/strip the message of any extra stuff so that it matches EXACTLY the structure of your Cassandra table.

Unfortunately you have to work on the config and debug it until you are able to make it work.


vkjuju commented Oct 16, 2017

Yes, we are creating the table and columns in Cassandra based on the grok pattern:
match => ["message", "%{TIMESTAMP_ISO8601:log_timestamp} %{WORD:serviceName} %{WORD:serverName} %{IP:serverIP} %{WORD:method} %{URIPATH:uriStem} %{NOTSPACE:uriQuery} %{NUMBER:port} %{NOTSPACE:username} %{IPORHOST:clientIP} %{NOTSPACE:protocolVersion} %{NOTSPACE:userAgent} %{NOTSPACE:cookie} %{NOTSPACE:referer} %{NOTSPACE:requestHost} %{NUMBER:response} %{NUMBER:subresponse} %{NUMBER:win32response} %{NUMBER:bytesSent} %{NUMBER:bytesReceived} %{NUMBER:timetaken}"


valentin-fischer commented Oct 16, 2017 via email


vkjuju commented Oct 17, 2017

@valentinul , we hit some errors as follows; any advice would be appreciated. You're welcome to connect to our server once you get a chance...
root@199mysqlmove:/opt/logstash/bin# ./logstash -f ../iis.conf
plugin is using the 'milestone' method to declare the version of the plugin this method is deprecated in favor of declaring the version inside the gemspec. {:level=>:warn}
Unknown setting 'source' for cassandra {:level=>:error}
Unknown setting 'ignore_bad_messages' for cassandra {:level=>:error}
Unknown setting 'batch_size' for cassandra {:level=>:error}
Unknown setting 'batch_processor_thread_period' for cassandra {:level=>:error}
Unknown setting 'max_retries' for cassandra {:level=>:error}
Unknown setting 'retry_delay' for cassandra {:level=>:error}
Error: Something is wrong with your configuration.
You may be interested in the '--configtest' flag which you can
use to validate logstash's configuration before you choose
to restart a running system.

@valentin-fischer

Seems like you are trying to set options that are not supported by the cassandra output.
Try to exclude them and see if it works.


vkjuju commented Oct 18, 2017

@valentinul , we think something is wrong with "hints"? We're not sure how to define those "hints"; any advice would be appreciated...
Our IIS log looks like this:
2017-09-01 01:32:04 W3SVC1 155sqlsrv01 192.168.112.155 GET / - 80 - 192.168.25.80 Mozilla/5.0+(Windows+NT+6.1;+WOW64;+rv:52.0)+Gecko/20100101+Firefox/52.0 - 304 0 0 262
2017-09-01 01:32:04 W3SVC1 155sqlsrv01 192.168.112.155 GET /iis-85.png - 80 - 192.168.25.80 Mozilla/5.0+(Windows+NT+6.1;+WOW64;+rv:52.0)+Gecko/20100101+Firefox/52.0 http://192.168.112.155/ 304 0 0 135
2017-09-01 01:32:05 W3SVC1 155sqlsrv01 192.168.112.155 GET /favicon.ico - 80 - 192.168.25.80 Mozilla/5.0+(Windows+NT+6.1;+WOW64;+rv:52.0)+Gecko/20100101+Firefox/52.0 - 404 0 2 205
2017-09-01 01:32:05 W3SVC1 155sqlsrv01 192.168.112.155 GET /favicon.ico - 80 - 192.168.25.80 Mozilla/5.0+(Windows+NT+6.1;+WOW64;+rv:52.0)+Gecko/20100101+Firefox/52.0 - 404 0 2 248
2017-09-01 01:32:14 W3SVC1 155sqlsrv01 192.168.112.155 GET / - 80 - 192.168.25.80 Mozilla/5.0+(Windows+NT+6.1;+WOW64;+rv:52.0)+Gecko/20100101+Firefox/52.0 - 304 0 0 32

logstash config file:
input {
    file {
        type => "iis-w3c"
        path => "/home/mysqlmove/logstash-5.5.1/iis_logs/*"
    }
}

filter {
    if [message] =~ "^#" {
        drop {}
    }

    grok {
        match => ["message", "%{TIMESTAMP_ISO8601:log_timestamp} %{WORD:serviceName} %{WORD:serverName} %{IP:serverIP} %{WORD:method} %{URIPATH:uriStem} %{NOTSPACE:uriQuery} %{NUMBER:port} %{NOTSPACE:username} %{IPORHOST:clientIP} %{NOTSPACE:protocolVersion} %{NOTSPACE:userAgent} %{NOTSPACE:cookie} %{NOTSPACE:referer} %{NOTSPACE:requestHost} %{NUMBER:response} %{NUMBER:subresponse} %{NUMBER:win32response} %{NUMBER:bytesSent} %{NUMBER:bytesReceived} %{NUMBER:timetaken}"]
    }

    date {
        match => [ "log_timestamp", "YYYY-MM-dd HH:mm:ss" ]
        timezone => "Etc/UTC"
    }

    if [bytesSent] {
        ruby {
            code => "event['kilobytesSent'] = event['bytesSent'].to_i / 1024.0"
        }
    }

    if [bytesReceived] {
        ruby {
            code => "event['kilobytesReceived'] = event['bytesReceived'].to_i / 1024.0"
        }
    }

    mutate {
        convert => ["bytesSent", "integer"]
        convert => ["bytesReceived", "integer"]
        convert => ["timetaken", "integer"]

        add_field => { "clientHostname" => "%{clientIP}" }

        remove_field => [ "log_timestamp"]
    }

    dns {
        action => "replace"
        reverse => ["clientHostname"]
    }

    useragent {
        source => "useragent"
        prefix => "browser"
    }
}

output {
    cassandra {
        username => "cassandra"
        password => "cassandra"
        hosts => ["192.168.112.171"]
        keyspace => "mykeyspace"
        table => "query_log"

        consistency => "all"

        source => "payload"

        hints => {
            id => "int"
            at => "timestamp"
            resellerId => "int"
            errno => "int"
            duration => "float"
            ip => "inet"
        }

        ignore_bad_messages => true
        ignore_bad_values => true

        batch_size => 100
        batch_processor_thread_period => 1
        max_retries => 3
        retry_delay => 3
    }
}


valentin-fischer commented Oct 18, 2017 via email


vkjuju commented Oct 24, 2017

@valentinul , logstash -f ../iis.conf works, but when we add some IIS logs we hit the following error messages; any advice would be appreciated...

root@199mysqlmove:/opt/logstash/bin# ./logstash -f ../iis.conf
root@199mysqlmove:/opt/logstash/bin# ./logstash -f ../iis.conf
plugin is using the 'milestone' method to declare the version of the plugin this method is deprecated in favor of declaring the version inside the gemspec. {:level=>:warn}
Settings: Default pipeline workers: 2
Logstash startup completed

Failed to prepare query {:action=>{"table"=>"query_log", "data"=>{"message"=>"2017-09-05 09:45:31 W3SVC1 155sqlsrv01 192.168.112.155 GET / - 80 - 192.168.25.80 curl/7.55.1 - 200 0 0 59\r", "path"=>"/opt/logstash/iis_logs/u_ex170905xxx01.log", "host"=>"199mysqlmove", "type"=>"iis-w3c", "tags"=>["_grokparsefailure"], "clientHostname"=>"%{clientIP}"}}, :exception=>#<Cassandra::Errors::InvalidError: Undefined column name message>, :backtrace=>[], :level=>:error}
Failed to prepare query {:action=>{"table"=>"query_log", "data"=>{"message"=>"2017-09-05 09:46:30 W3SVC1 155sqlsrv01 192.168.112.155 GET / - 80 - 192.168.25.80 curl/7.55.1 - 200 0 0 5\r", "path"=>"/opt/logstash/iis_logs/u_ex170905xxx01.log", "host"=>"199mysqlmove", "type"=>"iis-w3c", "tags"=>["_grokparsefailure"], "clientHostname"=>"%{clientIP}"}}, :exception=>#<Cassandra::Errors::InvalidError: Undefined column name message>, :backtrace=>[], :level=>:error}


valentin-fischer commented Oct 25, 2017 via email


vkjuju commented Oct 26, 2017

Thanks @valentinul ,
We are stuck on how to match them 1:1; could you verify it for us?
iis log:
2017-09-07 09:44:02 W3SVC1 155sqlsrv01 192.168.112.155 GET /iis-85.png - 80 - 192.168.20.32 Mozilla/5.0+(Windows+NT+10.0;+Win64;+x64)+AppleWebKit/537.36+(KHTML,+like+Gecko)+Chrome/60.0.3112.113+Safari/537.36 http://192.168.112.155/ 200 0 0 599

logstash.conf :
grok {
## Very helpful site for building these statements:
# http://grokdebug.herokuapp.com/
#
# This is configured to parse out every field of IIS's W3C format when
# every field is included in the logs
#
match => ["message", "%{TIMESTAMP_ISO8601:log_timestamp} %{WORD:serviceName} %{WORD:serverName} %{IP:serverIP} %{WORD:method} %{URIPATH:uriStem} %{NOTSPACE:uriQuery} %{NUMBER:port} %{NOTSPACE:username} %{IPORHOST:clientIP} %{NOTSPACE:protocolVersion} %{NOTSPACE:userAgent} %{NOTSPACE:cookie} %{NOTSPACE:referer} %{NOTSPACE:requestHost} %{NUMBER:response} %{NUMBER:subresponse} %{NUMBER:win32response} %{NUMBER:bytesSent} %{NUMBER:bytesReceived} %{NUMBER:timetaken}"]

We are not sure how to create the table based on the above IIS log, logstash.conf, and your output example (hints); any chance you could connect to our server to take a look? It's the last mile from the beginning...

Thanks
Joe
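
For reference, every field left on the event needs a matching column in the target table (the "Undefined column name message" errors above come from event fields like message, path, host, type, tags and clientHostname having no columns), so either add columns for them or remove them in the filter. A purely illustrative sketch of a matching table, assuming the mykeyspace keyspace from the config above already exists; names and types are only examples, and since Cassandra folds unquoted identifiers to lowercase it is simplest to keep the field names lowercase on both sides (note the filter above currently removes log_timestamp, so either stop removing it or pick another time column):

    # illustrative only: column names must line up 1:1 with the event field names
    cqlsh 192.168.112.171 -u cassandra -p cassandra -e "
      CREATE TABLE mykeyspace.query_log (
        log_timestamp timestamp,
        clientip      inet,
        method        text,
        uristem       text,
        response      int,
        bytessent     int,
        bytesreceived int,
        timetaken     int,
        PRIMARY KEY (clientip, log_timestamp));"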


valentin-fischer commented Oct 26, 2017 via email
