From 1d7934900f7b27e5009b826a7f7638c3c9ad4ba7 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Fri, 2 Jan 2015 15:34:30 +0000 Subject: [PATCH 01/75] Always forget make doc for Change log TOC --- docs/ChangeLog.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/ChangeLog.md b/docs/ChangeLog.md index 1c57497a..2b97a428 100644 --- a/docs/ChangeLog.md +++ b/docs/ChangeLog.md @@ -4,7 +4,7 @@ **Table of Contents** *generated with [DocToc](http://doctoc.herokuapp.com/)* -- [?.?](#) +- [1.3](#13) - [1.2](#12) - [1.1](#11) - [1.0](#10) From 29b7957a33a2cbd5868e21a78014fb0e42c659f1 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Fri, 9 Jan 2015 23:07:22 +0000 Subject: [PATCH 02/75] Add -stdin command line argument and stdin configuration section, and remove special "-" file path that previously read from stdin. Log Courier will now read ONLY from stdin when -stdin is specified, using the configuration in the stdin section, and will exit automatically on EOF (Implements elasticsearch/logstash-forwarder#251 and elasticsearch/logstash-forwarder#343) --- docs/CommandLineArguments.md | 7 ++ docs/Configuration.md | 130 ++++++++++++++++------------ spec/courier_spec.rb | 42 ++++----- spec/lib/helpers/log-courier.rb | 17 ++++ src/lc-lib/core/config.go | 66 ++++++++++---- src/lc-lib/harvester/harvester.go | 56 ++++++------ src/lc-lib/prospector/info.go | 6 -- src/lc-lib/prospector/prospector.go | 34 +------- src/lc-lib/publisher/publisher.go | 33 ++++++- src/log-courier/log-courier.go | 64 +++++++++++--- 10 files changed, 274 insertions(+), 181 deletions(-) diff --git a/docs/CommandLineArguments.md b/docs/CommandLineArguments.md index 3283e7da..49816a81 100644 --- a/docs/CommandLineArguments.md +++ b/docs/CommandLineArguments.md @@ -10,6 +10,7 @@ - [`-cpuprofile=`](#-cpuprofile=path) - [`-from-beginning`](#-from-beginning) - [`-list-supported`](#-list-supported) +- [`-stdin`](#-stdin) - [`-version`](#-version) @@ -60,6 +61,12 @@ discovered log files will start 
from the begining, regardless of this flag. Print a list of available transports and codecs provided by this build of Log Courier, then exit. +## `-stdin` + +Read log data from stdin and ignore files declaractions in the configuration +file. The fields and codec can be configured in the configuration file under +the `"stdin"` section. + ## `-version` Print the version of this build of Log Courier, then exit. diff --git a/docs/Configuration.md b/docs/Configuration.md index 000cf660..366e2f5e 100644 --- a/docs/Configuration.md +++ b/docs/Configuration.md @@ -11,6 +11,10 @@ - [String, Number, Boolean, Array, Dictionary](#string-number-boolean-array-dictionary) - [Duration](#duration) - [Fileglob](#fileglob) +- [Stream Configuration](#stream-configuration) + - [`"codec"`](#codec) + - [`"dead time"`](#dead-time) + - [`"fields"`](#fields) - [`"general"`](#general) - [`"admin enabled"`](#admin-enabled) - [`"admin listen address"`](#admin-listen-address) @@ -39,11 +43,9 @@ - [`"timeout"`](#timeout) - [`"transport"`](#transport) - [`"files"`](#files) - - [`"codec"`](#codec) - - [`"dead time"`](#dead-time) - - [`"fields"`](#fields) - [`"paths"`](#paths) - [`"includes"`](#includes) +- [`"stdin"`](#stdin) @@ -148,6 +150,64 @@ character-range: * `"/var/log/httpd/access.log"` * `"/var/log/httpd/access.log.[0-9]"` +## Stream Configuration + +Stream Configuration parameters can be specified for file groups within +[`"files"`](#files) and also for [`"stdin"`](#stdin). They customise the log +entries produced by passing, for example, by passing them through a codec and +adding extra fields. + +### `"codec"` + +*Codec configuration. Optional. Default: `{ "name": "plain" }`* +*Configuration reload will only affect new or resumed files* + +*Depending on how log-courier was built, some codecs may not be available. 
Run +`log-courier -list-supported` to see the list of codecs available in a specific +build of log-courier.* + +The specified codec will receive the lines read from the log stream and perform +any decoding necessary to generate events. The plain codec does nothing and +simply ships the events unchanged. + +All configurations are a dictionary with at least a "name" key. Additional +options can be provided if the specified codec allows. + +{ "name": "codec-name" } +{ "name": "codec-name", "option1": "value", "option2": "42" } + +Aside from "plain", the following codecs are available at this time. + +* [Filter](codecs/Filter.md) +* [Multiline](codecs/Multiline.md) + +### `"dead time"` + +*Duration. Optional. Default: "24h"* +*Configuration reload will only affect new or resumed files* + +If a log file has not been modified in this time period, it will be closed and +Log Courier will simply watch it for modifications. If the file is modified it +will be reopened. + +If a log file that is being harvested is deleted, it will remain on disk until +Log Courier closes it. Therefore it is important to keep this value sensible to +ensure old log files are not kept open preventing deletion. + +### `"fields"` + +*Dictionary. Optional* +*Configuration reload will only affect new or resumed files* + +Extra fields to attach the event prior to shipping. These can be simple strings, +numbers or even arrays and dictionaries. + +Examples: + +* `{ "type": "syslog" }` +* `{ "type": "apache", "server_names": [ "example.com", "www.example.com" ] }` +* `{ "type": "program", "program": { "exec": "program.py", "args": [ "--run", "--daemon" ] } }` + ## `"general"` The general configuration affects the general behaviour of Log Courier, such @@ -418,9 +478,8 @@ option. ## `"files"` -The file configuration lists the file groups that contain the logs you wish to -ship. It is an array of file group configurations. A minimum of one file group -configuration must be specified. 
+The files configuration lists the file groups that contain the logs you wish to +ship. It is an array of file group configurations. ``` [ @@ -433,56 +492,8 @@ configuration must be specified. ] ``` -### `"codec"` - -*Codec configuration. Optional. Default: `{ "name": "plain" }`* -*Configuration reload will only affect new or resumed files* - -*Depending on how log-courier was built, some codecs may not be available. Run -`log-courier -list-supported` to see the list of codecs available in a specific -build of log-courier.* - -The specified codec will receive the lines read from the log stream and perform -any decoding necessary to generate events. The plain codec does nothing and -simply ships the events unchanged. - -All configurations are a dictionary with at least a "name" key. Additional -options can be provided if the specified codec allows. - - { "name": "codec-name" } - { "name": "codec-name", "option1": "value", "option2": "42" } - -Aside from "plain", the following codecs are available at this time. - -* [Filter](codecs/Filter.md) -* [Multiline](codecs/Multiline.md) - -### `"dead time"` - -*Duration. Optional. Default: "24h"* -*Configuration reload will only affect new or resumed files* - -If a log file has not been modified in this time period, it will be closed and -Log Courier will simply watch it for modifications. If the file is modified it -will be reopened. - -If a log file that is being harvested is deleted, it will remain on disk until -Log Courier closes it. Therefore it is important to keep this value sensible to -ensure old log files are not kept open preventing deletion. - -### `"fields"` - -*Dictionary. Optional* -*Configuration reload will only affect new or resumed files* - -Extra fields to attach the event prior to shipping. These can be simple strings, -numbers or even arrays and dictionaries. 
- -Examples: - -* `{ "type": "syslog" }` -* `{ "type": "apache", "server_names": [ "example.com", "www.example.com" ] }` -* `{ "type": "program", "program": { "exec": "program.py", "args": [ "--run", "--daemon" ] } }` +In addition to the configuration parameters specified below, each file group may +also have [Stream Configuration](#streamconfiguration) parameters specified. ### `"paths"` @@ -515,3 +526,10 @@ following. "paths": [ "/var/log/httpd/access.log" ], "fields": [ "type": "access_log" ] } ] + +## `"stdin"` + +The stdin configuration contains the +[Stream Configuration](#streamconfiguration) parameters that should be used when +Log Courier is set to read log data from stdin using the +[`-stdin`](CommandLineArguments.md#stdin) command line entry. diff --git a/spec/courier_spec.rb b/spec/courier_spec.rb index 12ebace5..50e497df 100644 --- a/spec/courier_spec.rb +++ b/spec/courier_spec.rb @@ -29,11 +29,9 @@ "ssl ca": "#{@ssl_cert.path}", "servers": [ "localhost:#{server_port}" ] }, - "files": [ - { - "paths": [ "-" ] - } - ] + "stdin": { + "fields": { "type": "stdin" } + } } config @@ -49,8 +47,11 @@ expect(e['message']).to eq "stdin line test #{i}" expect(e['host']).to eq host expect(e['path']).to eq '-' + expect(e['type']).to eq 'stdin' i += 1 end + + stdin_shutdown end it 'should split lines that are too long' do @@ -59,12 +60,7 @@ "network": { "ssl ca": "#{@ssl_cert.path}", "servers": [ "127.0.0.1:#{server_port}" ] - }, - "files": [ - { - "paths": [ "-" ] - } - ] + } } config @@ -90,6 +86,8 @@ expect(e['path']).to eq '-' i += 1 end + + stdin_shutdown end it 'should follow a file from the end' do @@ -496,12 +494,9 @@ "ssl ca": "#{@ssl_cert.path}", "servers": [ "127.0.0.1:#{server_port}" ] }, - "files": [ - { - "paths": [ "-" ], - "fields": { "array": [ 1, 2 ] } - } - ] + "stdin": { + "fields": { "array": [ 1, 2 ] } + } } config @@ -521,6 +516,8 @@ expect(e['path']).to eq '-' i += 1 end + + stdin_shutdown end it 'should allow dictionaries inside field 
configuration' do @@ -530,12 +527,9 @@ "ssl ca": "#{@ssl_cert.path}", "servers": [ "127.0.0.1:#{server_port}" ] }, - "files": [ - { - "paths": [ "-" ], - "fields": { "dict": { "first": "first", "second": 5 } } - } - ] + "stdin": { + "fields": { "dict": { "first": "first", "second": 5 } } + } } config @@ -555,6 +549,8 @@ expect(e['path']).to eq '-' i += 1 end + + stdin_shutdown end it 'should accept globs of configuration files to include' do diff --git a/spec/lib/helpers/log-courier.rb b/spec/lib/helpers/log-courier.rb index 12a4f089..0cd88b83 100644 --- a/spec/lib/helpers/log-courier.rb +++ b/spec/lib/helpers/log-courier.rb @@ -74,6 +74,8 @@ def startup(args = {}) _write_config config if args[:stdin] + args[:args] += ' ' if args[:args] + args[:args] += '-stdin=true' @log_courier_mode = 'r+' else @log_courier_mode = 'r' @@ -113,6 +115,21 @@ def _write_config(config) @config.close end + def stdin_shutdown + return unless @log_courier_mode == 'r+' + begin + # If this fails, don't bother closing write again + @log_courier_mode = 'r' + Timeout.timeout(30) do + # Close and wait + @log_courier.close_write + @log_courier_reader.join + end + rescue Timeout::Error + fail "Log-courier did not shutdown on stdin EOF" + end + end + def shutdown puts 'Shutting down Log Courier' return if @log_courier.nil? 
diff --git a/src/lc-lib/core/config.go b/src/lc-lib/core/config.go index a63dad52..60a80244 100644 --- a/src/lc-lib/core/config.go +++ b/src/lc-lib/core/config.go @@ -28,6 +28,7 @@ import ( "os" "path/filepath" "reflect" + "strings" "time" ) @@ -48,8 +49,8 @@ const ( default_NetworkConfig_Timeout time.Duration = 15 * time.Second default_NetworkConfig_Reconnect time.Duration = 1 * time.Second default_NetworkConfig_MaxPendingPayloads int64 = 10 - default_FileConfig_Codec string = "plain" - default_FileConfig_DeadTime int64 = 86400 + default_StreamConfig_Codec string = "plain" + default_StreamConfig_DeadTime int64 = 86400 ) var ( @@ -61,6 +62,7 @@ type Config struct { Network NetworkConfig `config:"network"` Files []FileConfig `config:"files"` Includes []string `config:"includes"` + Stdin StreamConfig `config:"stdin"` } type GeneralConfig struct { @@ -97,8 +99,7 @@ type CodecConfigStub struct { Unused map[string]interface{} } -type FileConfig struct { - Paths []string `config:"paths"` +type StreamConfig struct { Fields map[string]interface{} `config:"fields"` Codec CodecConfigStub `config:"codec"` DeadTime time.Duration `config:"dead time"` @@ -106,6 +107,12 @@ type FileConfig struct { CodecFactory CodecFactory } +type FileConfig struct { + Paths []string `config:"paths"` + + StreamConfig `config:",embed"` +} + func NewConfig() *Config { return &Config{} } @@ -343,27 +350,38 @@ func (c *Config) Load(path string) (err error) { } for k := range c.Files { - if c.Files[k].Codec.Name == "" { - c.Files[k].Codec.Name = default_FileConfig_Codec - } - - if registrar_func, ok := registered_Codecs[c.Files[k].Codec.Name]; ok { - if c.Files[k].CodecFactory, err = registrar_func(c, fmt.Sprintf("/files[%d]/codec/", k), c.Files[k].Codec.Unused, c.Files[k].Codec.Name); err != nil { - return - } - } else { - err = fmt.Errorf("Unrecognised codec '%s' for 'files' entry %d", c.Files[k].Codec.Name, k) + if err = c.initStreamConfig(fmt.Sprintf("/files[%d]/codec/", k), 
&c.Files[k].StreamConfig); err != nil { return } + } + + if err = c.initStreamConfig("/stdin", &c.Stdin); err != nil { + return + } - if c.Files[k].DeadTime == time.Duration(0) { - c.Files[k].DeadTime = time.Duration(default_FileConfig_DeadTime) * time.Second + return +} + +func (c *Config) initStreamConfig(path string, stream_config *StreamConfig) (err error) { + if stream_config.Codec.Name == "" { + stream_config.Codec.Name = default_StreamConfig_Codec + } + + if registrar_func, ok := registered_Codecs[stream_config.Codec.Name]; ok { + if stream_config.CodecFactory, err = registrar_func(c, path, stream_config.Codec.Unused, stream_config.Codec.Name); err != nil { + return } + } else { + return fmt.Errorf("Unrecognised codec '%s' for %s", stream_config.Codec.Name, path) + } - // TODO: Event transmit length is uint32, if fields length is rediculous we will fail + if stream_config.DeadTime == time.Duration(0) { + stream_config.DeadTime = time.Duration(default_StreamConfig_DeadTime) * time.Second } - return + // TODO: EDGE CASE: Event transmit length is uint32, if fields length is rediculous we will fail + + return nil } // TODO: This should be pushed to a wrapper or module @@ -373,6 +391,7 @@ func (c *Config) Load(path string) (err error) { // We can then take the unused configuration dynamically at runtime based on another value func (c *Config) PopulateConfig(config interface{}, config_path string, raw_config map[string]interface{}) (err error) { vconfig := reflect.ValueOf(config).Elem() +FieldLoop: for i := 0; i < vconfig.NumField(); i++ { field := vconfig.Field(i) if !field.CanSet() { @@ -380,6 +399,17 @@ func (c *Config) PopulateConfig(config interface{}, config_path string, raw_conf } fieldtype := vconfig.Type().Field(i) tag := fieldtype.Tag.Get("config") + mods := strings.Split(tag, ",") + tag = mods[0] + mods = mods[1:] + for _, mod := range mods { + if mod == "embed" && field.Kind() == reflect.Struct { + if err = c.PopulateConfig(field.Addr().Interface(), 
config_path, raw_config); err != nil { + return + } + continue FieldLoop + } + } if tag == "" { continue } diff --git a/src/lc-lib/harvester/harvester.go b/src/lc-lib/harvester/harvester.go index f9cf6637..6767f147 100644 --- a/src/lc-lib/harvester/harvester.go +++ b/src/lc-lib/harvester/harvester.go @@ -36,18 +36,18 @@ type HarvesterFinish struct { type Harvester struct { sync.RWMutex - stop_chan chan interface{} - return_chan chan *HarvesterFinish - stream core.Stream - fileinfo os.FileInfo - path string - config *core.Config - fileconfig *core.FileConfig - offset int64 - output chan<- *core.EventDescriptor - codec core.Codec - file *os.File - split bool + stop_chan chan interface{} + return_chan chan *HarvesterFinish + stream core.Stream + fileinfo os.FileInfo + path string + config *core.Config + stream_config *core.StreamConfig + offset int64 + output chan<- *core.EventDescriptor + codec core.Codec + file *os.File + split bool line_speed float64 byte_speed float64 @@ -57,7 +57,7 @@ type Harvester struct { last_eof *time.Time } -func NewHarvester(stream core.Stream, config *core.Config, fileconfig *core.FileConfig, offset int64) *Harvester { +func NewHarvester(stream core.Stream, config *core.Config, stream_config *core.StreamConfig, offset int64) *Harvester { var fileinfo os.FileInfo var path string @@ -70,18 +70,18 @@ func NewHarvester(stream core.Stream, config *core.Config, fileconfig *core.File } ret := &Harvester{ - stop_chan: make(chan interface{}), - return_chan: make(chan *HarvesterFinish, 1), - stream: stream, - fileinfo: fileinfo, - path: path, - config: config, - fileconfig: fileconfig, - offset: offset, - last_eof: nil, + stop_chan: make(chan interface{}), + return_chan: make(chan *HarvesterFinish, 1), + stream: stream, + fileinfo: fileinfo, + path: path, + config: config, + stream_config: stream_config, + offset: offset, + last_eof: nil, } - ret.codec = fileconfig.CodecFactory.NewCodec(ret.eventCallback, ret.offset) + ret.codec = 
stream_config.CodecFactory.NewCodec(ret.eventCallback, ret.offset) return ret } @@ -240,7 +240,7 @@ ReadLoop: continue } - if age := time.Since(last_read_time); age > h.fileconfig.DeadTime { + if age := time.Since(last_read_time); age > h.stream_config.DeadTime { // if last_read_time was more than dead time, this file is probably dead. Stop watching it. log.Info("Stopping harvest of %s; last change was %v ago", h.path, age-(age%time.Second)) // TODO: We should return a Stat() from before we attempted to read @@ -262,8 +262,8 @@ func (h *Harvester) eventCallback(start_offset int64, end_offset int64, text str "offset": start_offset, "message": text, } - for k := range h.fileconfig.Fields { - event[k] = h.fileconfig.Fields[k] + for k := range h.stream_config.Fields { + event[k] = h.stream_config.Fields[k] } // If we split any of the line data, tag it @@ -391,8 +391,8 @@ func (h *Harvester) Snapshot() *core.Snapshot { ret.AddEntry("Status", "Alive") } else { ret.AddEntry("Status", "Idle") - if age := time.Since(*h.last_eof); age < h.fileconfig.DeadTime { - ret.AddEntry("Dead timer", h.fileconfig.DeadTime-age) + if age := time.Since(*h.last_eof); age < h.stream_config.DeadTime { + ret.AddEntry("Dead timer", h.stream_config.DeadTime-age) } else { ret.AddEntry("Dead timer", "0s") } diff --git a/src/lc-lib/prospector/info.go b/src/lc-lib/prospector/info.go index 52ea34fc..63477551 100644 --- a/src/lc-lib/prospector/info.go +++ b/src/lc-lib/prospector/info.go @@ -98,12 +98,6 @@ func (pi *prospectorInfo) stop() { if !pi.running { return } - if pi.file == "-" { - // Just in case someone started us outside a pipeline with stdin - // to stop confusion at why we don't exit after Ctrl+C - // There's no deadline on Stdin reads :-( - log.Notice("Waiting for Stdin to close (EOF or Ctrl+D)") - } pi.harvester.Stop() } diff --git a/src/lc-lib/prospector/prospector.go b/src/lc-lib/prospector/prospector.go index 7e0c6a0d..280507c2 100644 --- a/src/lc-lib/prospector/prospector.go +++ 
b/src/lc-lib/prospector/prospector.go @@ -100,38 +100,6 @@ func (p *Prospector) Run() { p.Done() }() - // Handle any "-" (stdin) paths - but only once - stdin_started := false - for config_k, config := range p.config.Files { - for i, path := range config.Paths { - if path == "-" { - if !stdin_started { - // We need to check err - we cannot allow a nil stat - stat, err := os.Stdin.Stat() - if err != nil { - log.Error("stat(Stdin) failed: %s", err) - continue - } - - // Stdin is implicitly an orphaned fileinfo - info := newProspectorInfoFromFileInfo("-", stat) - info.orphaned = Orphaned_Yes - - // Store the reference so we can shut it down later - p.prospectors[info] = info - - // Start the harvester - p.startHarvesterWithOffset(info, &p.config.Files[config_k], 0) - - stdin_started = true - } - - // Remove it from the file list - config.Paths = append(config.Paths[:i], config.Paths[i+1:]...) - } - } - } - ProspectLoop: for { newlastscan := time.Now() @@ -406,7 +374,7 @@ func (p *Prospector) startHarvester(info *prospectorInfo, fileconfig *core.FileC func (p *Prospector) startHarvesterWithOffset(info *prospectorInfo, fileconfig *core.FileConfig, offset int64) { // TODO - hook in a shutdown channel - info.harvester = harvester.NewHarvester(info, p.config, fileconfig, offset) + info.harvester = harvester.NewHarvester(info, p.config, &fileconfig.StreamConfig, offset) info.running = true info.status = Status_Ok info.harvester.Start(p.output) diff --git a/src/lc-lib/publisher/publisher.go b/src/lc-lib/publisher/publisher.go index 84f276ea..e5b3c17b 100644 --- a/src/lc-lib/publisher/publisher.go +++ b/src/lc-lib/publisher/publisher.go @@ -47,6 +47,28 @@ const ( Status_Reconnecting ) +type EventSpool interface { + Close() + Add(registrar.RegistrarEvent) + Send() +} + +type NullEventSpool struct { +} + +func newNullEventSpool() EventSpool { + return &NullEventSpool{} +} + +func (s *NullEventSpool) Close() { +} + +func (s *NullEventSpool) Add(event registrar.RegistrarEvent) { 
+} + +func (s *NullEventSpool) Send() { +} + type Publisher struct { core.PipelineSegment core.PipelineConfigReceiver @@ -65,7 +87,7 @@ type Publisher struct { num_payloads int64 out_of_sync int input chan []*core.EventDescriptor - registrar_spool *registrar.RegistrarEventSpool + registrar_spool EventSpool shutdown bool line_count int64 retry_count int64 @@ -78,11 +100,16 @@ type Publisher struct { last_measurement time.Time } -func NewPublisher(pipeline *core.Pipeline, config *core.NetworkConfig, registrar_imp *registrar.Registrar) (*Publisher, error) { +func NewPublisher(pipeline *core.Pipeline, config *core.NetworkConfig, registrar *registrar.Registrar) (*Publisher, error) { ret := &Publisher{ config: config, input: make(chan []*core.EventDescriptor, 1), - registrar_spool: registrar_imp.Connect(), + } + + if registrar == nil { + ret.registrar_spool = newNullEventSpool() + } else { + ret.registrar_spool = registrar.Connect() } if err := ret.init(); err != nil { diff --git a/src/log-courier/log-courier.go b/src/log-courier/log-courier.go index c600b497..b17ce995 100644 --- a/src/log-courier/log-courier.go +++ b/src/log-courier/log-courier.go @@ -25,6 +25,7 @@ import ( "github.com/op/go-logging" "lc-lib/admin" "lc-lib/core" + "lc-lib/harvester" "lc-lib/prospector" "lc-lib/spooler" "lc-lib/publisher" @@ -49,7 +50,9 @@ type LogCourier struct { shutdown_chan chan os.Signal reload_chan chan os.Signal config_file string + stdin bool from_beginning bool + harvester *harvester.Harvester log_file *os.File last_snapshot time.Time snapshot *core.Snapshot @@ -65,33 +68,47 @@ func NewLogCourier() *LogCourier { func (lc *LogCourier) Run() { var admin_listener *admin.Listener var on_command <-chan string + var harvester_wait <-chan *harvester.HarvesterFinish + var registrar_imp *registrar.Registrar lc.startUp() log.Info("Log Courier version %s pipeline starting", core.Log_Courier_Version) - if lc.config.General.AdminEnabled { - var err error + // If reading from stdin, skip 
admin, and set up a null registrar + if lc.stdin { + registrar_imp = nil + } else { + if lc.config.General.AdminEnabled { + var err error - admin_listener, err = admin.NewListener(lc.pipeline, &lc.config.General) - if err != nil { - log.Fatalf("Failed to initialise: %s", err) + admin_listener, err = admin.NewListener(lc.pipeline, &lc.config.General) + if err != nil { + log.Fatalf("Failed to initialise: %s", err) + } + + on_command = admin_listener.OnCommand() } - on_command = admin_listener.OnCommand() + registrar_imp = registrar.NewRegistrar(lc.pipeline, lc.config.General.PersistDir) } - registrar := registrar.NewRegistrar(lc.pipeline, lc.config.General.PersistDir) - - publisher, err := publisher.NewPublisher(lc.pipeline, &lc.config.Network, registrar) + publisher, err := publisher.NewPublisher(lc.pipeline, &lc.config.Network, registrar_imp) if err != nil { log.Fatalf("Failed to initialise: %s", err) } - spooler := spooler.NewSpooler(lc.pipeline, &lc.config.General, publisher) + spooler_imp := spooler.NewSpooler(lc.pipeline, &lc.config.General, publisher) - if _, err := prospector.NewProspector(lc.pipeline, lc.config, lc.from_beginning, registrar, spooler); err != nil { - log.Fatalf("Failed to initialise: %s", err) + // If reading from stdin, don't start prospector, directly start a harvester + if lc.stdin { + lc.harvester = harvester.NewHarvester(nil, lc.config, &lc.config.Stdin, 0) + lc.harvester.Start(spooler_imp.Connect()) + harvester_wait = lc.harvester.OnFinish() + } else { + if _, err := prospector.NewProspector(lc.pipeline, lc.config, lc.from_beginning, registrar_imp, spooler_imp); err != nil { + log.Fatalf("Failed to initialise: %s", err) + } } // Start the pipeline @@ -113,6 +130,15 @@ SignalLoop: lc.reloadConfig() case command := <-on_command: admin_listener.Respond(lc.processCommand(command)) + case finished := <-harvester_wait: + if finished.Error != nil { + log.Notice("An error occurred reading from stdin at offset %d: %s", finished.Last_Offset, 
finished.Error) + } else { + log.Notice("Finished reading from stdin at offset %d", finished.Last_Offset) + } + lc.harvester = nil + lc.cleanShutdown() + break SignalLoop } } @@ -135,6 +161,7 @@ func (lc *LogCourier) startUp() { flag.StringVar(&cpu_profile, "cpuprofile", "", "write cpu profile to file") flag.StringVar(&lc.config_file, "config", "", "The config file to load") + flag.BoolVar(&lc.stdin, "stdin", false, "Read from stdin instead of files listed in the config file") flag.BoolVar(&lc.from_beginning, "from-beginning", false, "On first run, read new files from the beginning instead of the end") flag.Parse() @@ -236,8 +263,10 @@ func (lc *LogCourier) loadConfig() error { return err } - if len(lc.config.Files) == 0 { - return fmt.Errorf("No file groups were found in the configuration.") + if lc.stdin { + // TODO: Where to find stdin config for codec and fields? + } else if len(lc.config.Files) == 0 { + log.Warning("No file groups were found in the configuration.") } return nil @@ -281,6 +310,13 @@ func (lc *LogCourier) processCommand(command string) *admin.Response { func (lc *LogCourier) cleanShutdown() { log.Notice("Initiating shutdown") + + if lc.harvester != nil { + lc.harvester.Stop() + finished := <-lc.harvester.OnFinish() + log.Notice("Aborted reading from stdin at offset %d", finished.Last_Offset) + } + lc.pipeline.Shutdown() lc.pipeline.Wait() } From 9d76ee51fd1edfd921915a01b9e9eb08663f1eb3 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Sat, 10 Jan 2015 16:15:49 +0000 Subject: [PATCH 03/75] Implement random selection of the initial TCP server - subsequent attempts are performed in configuration file order Closes #82 --- docs/Configuration.md | 5 +++++ src/lc-lib/transports/tcp.go | 12 +++++++++++- 2 files changed, 16 insertions(+), 1 deletion(-) diff --git a/docs/Configuration.md b/docs/Configuration.md index 000cf660..3e75af6b 100644 --- a/docs/Configuration.md +++ b/docs/Configuration.md @@ -359,6 +359,11 @@ Sets the list of servers to send logs 
to. DNS names are resolved into IP addresses each time connections are made and all available IP addresses are used. +Only the initial server is randomly selected. Subsequent connection attempts are +made to the next IP address available (if the server had multiple IP addresses) +or to the next server listed in the configuration file (if all addresses for the +previous server were exausted.) + ### `"ssl ca"` *Filepath. Required with "transport" = "tls". Not allowed otherwise* diff --git a/src/lc-lib/transports/tcp.go b/src/lc-lib/transports/tcp.go index fb8d9ec6..cc9f7f8a 100644 --- a/src/lc-lib/transports/tcp.go +++ b/src/lc-lib/transports/tcp.go @@ -136,7 +136,17 @@ func NewTcpTransportFactory(config *core.Config, config_path string, unused map[ } func (f *TransportTcpFactory) NewTransport(config *core.NetworkConfig) (core.Transport, error) { - return &TransportTcp{config: f, net_config: config}, nil + ret := &TransportTcp{ + config: f, + net_config: config, + } + + // Randomise the initial host - after this it will round robin + // Round robin after initial attempt ensures we don't retry same host twice, + // and also ensures we try all hosts one by one + ret.roundrobin = rand.Intn(len(config.Servers)) + + return ret, nil } func (t *TransportTcp) ReloadConfig(new_net_config *core.NetworkConfig) int { From 814fc55abc784ccada7c3130ff234bad15161b6a Mon Sep 17 00:00:00 2001 From: mhughes Date: Wed, 14 Jan 2015 15:45:59 -0500 Subject: [PATCH 04/75] Allow certificate authorities with intermediates. 
--- src/lc-lib/transports/tcp.go | 36 +++++++++++++++++++----------------- 1 file changed, 19 insertions(+), 17 deletions(-) diff --git a/src/lc-lib/transports/tcp.go b/src/lc-lib/transports/tcp.go index cc9f7f8a..ccef9427 100644 --- a/src/lc-lib/transports/tcp.go +++ b/src/lc-lib/transports/tcp.go @@ -25,7 +25,6 @@ import ( "crypto/x509" "encoding/binary" "encoding/pem" - "errors" "fmt" "io/ioutil" "lc-lib/core" @@ -105,26 +104,29 @@ func NewTcpTransportFactory(config *core.Config, config_path string, unused map[ if len(ret.SSLCA) > 0 { ret.tls_config.RootCAs = x509.NewCertPool() - pemdata, err := ioutil.ReadFile(ret.SSLCA) if err != nil { - return nil, fmt.Errorf("Failure reading CA certificate: %s", err) - } - - block, _ := pem.Decode(pemdata) - if block == nil { - return nil, errors.New("Failed to decode CA certificate data") + return nil, fmt.Errorf("Failure reading CA certificate: %s\n", err) } - if block.Type != "CERTIFICATE" { - return nil, fmt.Errorf("Specified CA certificate is not a certificate: %s", ret.SSLCA) - } - - cert, err := x509.ParseCertificate(block.Bytes) - if err != nil { - return nil, fmt.Errorf("Failed to parse CA certificate: %s", err) + rest := pemdata + var block *pem.Block + var pemBlockNum = 1 + for { + block, rest = pem.Decode(rest) + if block != nil { + if block.Type != "CERTIFICATE" { + return nil, fmt.Errorf("Block %d does not contain a certificate: %s\n", pemBlockNum, ret.SSLCA) + } + cert, err := x509.ParseCertificate(block.Bytes) + if err != nil { + return nil, fmt.Errorf("Failed to parse CA certificate in block %d: %s\n", pemBlockNum, ret.SSLCA) + } + ret.tls_config.RootCAs.AddCert(cert) + pemBlockNum += 1 + } else { + break + } } - - ret.tls_config.RootCAs.AddCert(cert) } } else { if err := config.ReportUnusedConfig(config_path, unused); err != nil { From a7d51927508104de93bddc741769cdec1bf6cf96 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Sat, 17 Jan 2015 14:15:18 +0000 Subject: [PATCH 05/75] Disable SSLv2 and SSLv3 --- 
lib/log-courier/client_tls.rb | 9 +++++++++ lib/log-courier/server_tcp.rb | 10 ++++++++++ src/lc-lib/transports/tcp.go | 3 +++ 3 files changed, 22 insertions(+) diff --git a/lib/log-courier/client_tls.rb b/lib/log-courier/client_tls.rb index dec33e85..6ebe68ee 100644 --- a/lib/log-courier/client_tls.rb +++ b/lib/log-courier/client_tls.rb @@ -202,6 +202,15 @@ def tls_connect ssl = OpenSSL::SSL::SSLContext.new + # Disable SSLv2 and SSLv3 + # Call set_params first to ensure options attribute is there (hmmmm?) + ssl.set_params + # Modify the default options to ensure SSLv2 and SSLv3 is disabled + # This retains any beneficial options set by default in the current Ruby implementation + ssl.options |= OpenSSL::SSL::OP_NO_SSLv2 if defined?(OpenSSL::SSL::OP_NO_SSLv2) + ssl.options |= OpenSSL::SSL::OP_NO_SSLv3 if defined?(OpenSSL::SSL::OP_NO_SSLv3) + + # Set the certificate file unless @options[:ssl_certificate].nil? ssl.cert = OpenSSL::X509::Certificate.new(File.read(@options[:ssl_certificate])) ssl.key = OpenSSL::PKey::RSA.new(File.read(@options[:ssl_key]), @options[:ssl_key_passphrase]) diff --git a/lib/log-courier/server_tcp.rb b/lib/log-courier/server_tcp.rb index 8ef264c3..8cb7f4ac 100644 --- a/lib/log-courier/server_tcp.rb +++ b/lib/log-courier/server_tcp.rb @@ -87,6 +87,16 @@ def initialize(options = {}) if @options[:transport] == 'tls' ssl = OpenSSL::SSL::SSLContext.new + + # Disable SSLv2 and SSLv3 + # Call set_params first to ensure options attribute is there (hmmmm?) 
+ ssl.set_params + # Modify the default options to ensure SSLv2 and SSLv3 is disabled + # This retains any beneficial options set by default in the current Ruby implementation + ssl.options |= OpenSSL::SSL::OP_NO_SSLv2 if defined?(OpenSSL::SSL::OP_NO_SSLv2) + ssl.options |= OpenSSL::SSL::OP_NO_SSLv3 if defined?(OpenSSL::SSL::OP_NO_SSLv3) + + # Set the certificate file ssl.cert = OpenSSL::X509::Certificate.new(File.read(@options[:ssl_certificate])) ssl.key = OpenSSL::PKey::RSA.new(File.read(@options[:ssl_key]), @options[:ssl_key_passphrase]) diff --git a/src/lc-lib/transports/tcp.go b/src/lc-lib/transports/tcp.go index cc9f7f8a..354beb86 100644 --- a/src/lc-lib/transports/tcp.go +++ b/src/lc-lib/transports/tcp.go @@ -223,6 +223,9 @@ func (t *TransportTcp) Init() error { // Now wrap in TLS if this is the "tls" transport if t.config.transport == "tls" { + // Disable SSLv3 (mitigate POODLE vulnerability) + t.config.tls_config.MinVersion = tls.VersionTLS10 + // Set the tlsconfig server name for server validation (required since Go 1.3) t.config.tls_config.ServerName = t.host From c8ba56a77af424f2dbfb967455b37c76e1ae30c3 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Sat, 17 Jan 2015 14:15:29 +0000 Subject: [PATCH 06/75] Fix jruby rspec test --- spec/lib/helpers/common.rb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/spec/lib/helpers/common.rb b/spec/lib/helpers/common.rb index bf93570d..5ae29a27 100644 --- a/spec/lib/helpers/common.rb +++ b/spec/lib/helpers/common.rb @@ -194,7 +194,7 @@ def receive_and_check(args = {}, &block) if block.nil? found = @files.find do |f| next unless f.pending? 
- f.logged?(event: e, **args) + f.logged?({event: e}.merge!(args)) end expect(found).to_not be_nil, "Event received not recognised: #{e}" else From d68d4fec50d44960404ffad40eaa4cf0254e6685 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Sat, 17 Jan 2015 16:31:44 +0000 Subject: [PATCH 07/75] Log file reopen on configuration reload, fixes #91 --- src/log-courier/log-courier.go | 12 +++++-- src/log-courier/logging.go | 66 ++++++++++++++++++++++++++++++++-- 2 files changed, 73 insertions(+), 5 deletions(-) diff --git a/src/log-courier/log-courier.go b/src/log-courier/log-courier.go index c600b497..2919268e 100644 --- a/src/log-courier/log-courier.go +++ b/src/log-courier/log-courier.go @@ -50,7 +50,7 @@ type LogCourier struct { reload_chan chan os.Signal config_file string from_beginning bool - log_file *os.File + log_file *DefaultLogBackend last_snapshot time.Time snapshot *core.Snapshot } @@ -209,12 +209,12 @@ func (lc *LogCourier) configureLogging() (err error) { // Log file? if lc.config.General.LogFile != "" { - lc.log_file, err = os.OpenFile(lc.config.General.LogFile, os.O_CREATE|os.O_RDWR|os.O_APPEND, 0640) + lc.log_file, err = NewDefaultLogBackend(lc.config.General.LogFile, "", stdlog.LstdFlags|stdlog.Lmicroseconds) if err != nil { return } - backends = append(backends, logging.NewLogBackend(lc.log_file, "", stdlog.LstdFlags|stdlog.Lmicroseconds)) + backends = append(backends, lc.log_file) } if err = lc.configureLoggingPlatform(&backends); err != nil { @@ -253,6 +253,12 @@ func (lc *LogCourier) reloadConfig() error { // Update the log level logging.SetLevel(lc.config.General.LogLevel, "") + // Reopen the log file if we specified one + if lc.log_file != nil { + lc.log_file.Reopen() + log.Notice("Log file reopened") + } + // Pass the new config to the pipeline workers lc.pipeline.SendConfig(lc.config) diff --git a/src/log-courier/logging.go b/src/log-courier/logging.go index 2142e3ba..b7c48060 100644 --- a/src/log-courier/logging.go +++ 
b/src/log-courier/logging.go @@ -16,10 +16,72 @@ package main -import "github.com/op/go-logging" +import ( + "github.com/op/go-logging" + golog "log" + "io/ioutil" + "os" +) var log *logging.Logger func init() { - log = logging.MustGetLogger("log-courier") + log = logging.MustGetLogger("log-courier") +} + +type DefaultLogBackend struct { + file *os.File + path string +} + +func NewDefaultLogBackend(path string, prefix string, flag int) (*DefaultLogBackend, error) { + ret := &DefaultLogBackend{ + path: path, + } + + golog.SetPrefix(prefix) + golog.SetFlags(flag) + + err := ret.Reopen() + if err != nil { + return nil, err + } + + return ret, nil +} + +func (f *DefaultLogBackend) Log(level logging.Level, calldepth int, rec *logging.Record) error { + golog.Print(rec.Formatted(calldepth+1)) + return nil +} + +func (f *DefaultLogBackend) Reopen() (err error) { + var new_file *os.File + + new_file, err = os.OpenFile(f.path, os.O_CREATE|os.O_RDWR|os.O_APPEND, 0640) + if err != nil { + return + } + + // Switch to new output before closing + golog.SetOutput(new_file) + + if f.file != nil { + f.file.Close() + } + + f.file = new_file + + return nil +} + +func (f *DefaultLogBackend) Close() { + // Discard logs before closing + golog.SetOutput(ioutil.Discard) + + if f.file != nil { + f.file.Close() + } + + f.file = nil } From 891a4956babfcf1cdbf0035ba8be89b40e6f3b50 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Sat, 17 Jan 2015 17:48:14 +0000 Subject: [PATCH 08/75] Update documentation --- README.md | 2 +- docs/AdministrationUtility.md | 2 +- docs/ChangeLog.md | 32 +++++++++++++++++++++++++++++++- docs/CommandLineArguments.md | 6 +++--- docs/Configuration.md | 5 ++++- docs/LogstashIntegration.md | 2 +- docs/codecs/Filter.md | 2 +- docs/codecs/Multiline.md | 2 +- docs/examples/example-stdin.conf | 9 +++------ 9 files changed, 46 insertions(+), 16 deletions(-) diff --git a/README.md b/README.md index 6cdebc9c..c2d43670 100644 --- a/README.md +++ b/README.md @@ -8,7 +8,7 @@ with 
many fixes and behavioural improvements. -**Table of Contents** *generated with [DocToc](http://doctoc.herokuapp.com/)* +**Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)* - [Features](#features) - [Installation](#installation) diff --git a/docs/AdministrationUtility.md b/docs/AdministrationUtility.md index f822bf56..cc9e0861 100644 --- a/docs/AdministrationUtility.md +++ b/docs/AdministrationUtility.md @@ -2,7 +2,7 @@ -**Table of Contents** *generated with [DocToc](http://doctoc.herokuapp.com/)* +**Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)* - [Overview](#overview) - [Available Commands](#available-commands) diff --git a/docs/ChangeLog.md b/docs/ChangeLog.md index 2b97a428..c5688bbb 100644 --- a/docs/ChangeLog.md +++ b/docs/ChangeLog.md @@ -2,8 +2,9 @@ -**Table of Contents** *generated with [DocToc](http://doctoc.herokuapp.com/)* +**Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)* +- [1.4](#14) - [1.3](#13) - [1.2](#12) - [1.1](#11) @@ -19,6 +20,35 @@ +## 1.4 + +*???* + +***Breaking Changes*** + +* The way in which logs are read from stdin has been significantly changed. The +"-" path in the configuration is no longer special and no longer reads from +stdin. Instead, you must now start log-courier with the `-stdin` command line +argument, and configure the codec and additional fields in the new `stdin` +configuration file section. Log Courier will now also exit cleanly once all data +from stdin has been read and acknowledged by the server (previously it would +hang forever.) + +***Changes*** + +* Implement random selection of the initial server connection. This partly +reverts a change made in version 1.2. Subsequent connections due to connection +failures will still round robin. +* Allow use of certificate files containing intermediates within the Log Courier +configuration. (Thanks @mhughes - #88) +* A configuration reload will now reopen log files. 
(#91) +* Implement support for SRV record server entries (#85) + +***Security*** + +* SSLv2 and SSLv3 are now explicitly disabled in Log Courier and the logstash +courier plugins to further enhance security when using the TLS transport. + ## 1.3 *2nd January 2015* diff --git a/docs/CommandLineArguments.md b/docs/CommandLineArguments.md index 49816a81..a65575ba 100644 --- a/docs/CommandLineArguments.md +++ b/docs/CommandLineArguments.md @@ -2,12 +2,12 @@ -**Table of Contents** *generated with [DocToc](http://doctoc.herokuapp.com/)* +**Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)* - [Overview](#overview) -- [`-config=`](#-config=path) +- [`-config=`](#-configpath) - [`-config-test`](#-config-test) -- [`-cpuprofile=`](#-cpuprofile=path) +- [`-cpuprofile=`](#-cpuprofilepath) - [`-from-beginning`](#-from-beginning) - [`-list-supported`](#-list-supported) - [`-stdin`](#-stdin) - [`-version`](#-version) diff --git a/docs/Configuration.md b/docs/Configuration.md index 20473251..c0b31c91 100644 --- a/docs/Configuration.md +++ b/docs/Configuration.md @@ -2,7 +2,7 @@ -**Table of Contents** *generated with [DocToc](http://doctoc.herokuapp.com/)* +**Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)* - [Overview](#overview) - [Reloading](#reloading) @@ -509,6 +509,9 @@ globs will be tailed. See above for a description of the Fileglob field type. 
+*To read from stdin, see the [`-stdin`](CommandLineArguments.md#-stdin) command +line argument.* + Examples: * `[ "/var/log/*.log" ]` diff --git a/docs/LogstashIntegration.md b/docs/LogstashIntegration.md index f2df4e84..a03a5813 100644 --- a/docs/LogstashIntegration.md +++ b/docs/LogstashIntegration.md @@ -5,7 +5,7 @@ Log Courier is built to work seamlessly with [Logstash](http://logstash.net) -**Table of Contents** *generated with [DocToc](http://doctoc.herokuapp.com/)* +**Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)* - [Installation](#installation) - [Logstash 1.5+ Plugin Manager](#logstash-15-plugin-manager) diff --git a/docs/codecs/Filter.md b/docs/codecs/Filter.md index 111dac8b..b46b165a 100644 --- a/docs/codecs/Filter.md +++ b/docs/codecs/Filter.md @@ -4,7 +4,7 @@ The filter codec strips out unwanted events, shipping only those desired. -**Table of Contents** *generated with [DocToc](http://doctoc.herokuapp.com/)* +**Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)* - [Example](#example) - [Options](#options) diff --git a/docs/codecs/Multiline.md b/docs/codecs/Multiline.md index 93504deb..5a082dd0 100644 --- a/docs/codecs/Multiline.md +++ b/docs/codecs/Multiline.md @@ -8,7 +8,7 @@ option. 
-**Table of Contents** *generated with [DocToc](http://doctoc.herokuapp.com/)* +**Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)* - [Example](#example) - [Options](#options) diff --git a/docs/examples/example-stdin.conf b/docs/examples/example-stdin.conf index 6b23dc6a..e7832d3e 100644 --- a/docs/examples/example-stdin.conf +++ b/docs/examples/example-stdin.conf @@ -3,10 +3,7 @@ "servers": [ "localhost:5043" ], "ssl ca": "./logstash.cer" }, - "files": [ - { - "paths": [ "-" ], - "fields": { "type": "stdin" } - } - ] + "stdin": { + "fields": { "type": "stdin" } + } } From 2a81971e795db91fbb085ac3c072931d16a48519 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Mon, 19 Jan 2015 21:27:56 +0000 Subject: [PATCH 09/75] Prevent gem crash when no logger is provided --- lib/log-courier/client.rb | 2 +- lib/log-courier/server.rb | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/lib/log-courier/client.rb b/lib/log-courier/client.rb index b77eb1a8..e01e6b43 100644 --- a/lib/log-courier/client.rb +++ b/lib/log-courier/client.rb @@ -97,7 +97,7 @@ def initialize(options = {}) }.merge!(options) @logger = @options[:logger] - @logger['plugin'] = 'output/courier' + @logger['plugin'] = 'output/courier' unless @logger.nil? require 'log-courier/client_tls' @client = ClientTls.new(@options) diff --git a/lib/log-courier/server.rb b/lib/log-courier/server.rb index 30d8312c..7f915fee 100644 --- a/lib/log-courier/server.rb +++ b/lib/log-courier/server.rb @@ -40,7 +40,7 @@ def initialize(options = {}) }.merge!(options) @logger = @options[:logger] - @logger['plugin'] = 'input/courier' + @logger['plugin'] = 'input/courier' unless @logger.nil? case @options[:transport] when 'tcp', 'tls' From c9ace78715cabeda5e311f7536adce5a9d772fdc Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Mon, 19 Jan 2015 21:28:34 +0000 Subject: [PATCH 10/75] Resolve namespace conflict. 
Fixes #96 --- lib/logstash/outputs/courier.rb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/lib/logstash/outputs/courier.rb b/lib/logstash/outputs/courier.rb index b72fcb7c..ea0d34c6 100644 --- a/lib/logstash/outputs/courier.rb +++ b/lib/logstash/outputs/courier.rb @@ -20,7 +20,7 @@ module LogStash module Outputs # Send events using the Log Courier protocol - class LogCourier < LogStash::Outputs::Base + class Courier < LogStash::Outputs::Base config_name 'courier' milestone 1 From 2f652affeb34175782da91d923f830d0557d62c4 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Mon, 19 Jan 2015 21:28:50 +0000 Subject: [PATCH 11/75] Normalise plugin startup sequences --- lib/logstash/inputs/courier.rb | 2 +- lib/logstash/outputs/courier.rb | 12 ++++++++---- 2 files changed, 9 insertions(+), 5 deletions(-) diff --git a/lib/logstash/inputs/courier.rb b/lib/logstash/inputs/courier.rb index 57d35c83..8aaf22dc 100644 --- a/lib/logstash/inputs/courier.rb +++ b/lib/logstash/inputs/courier.rb @@ -79,7 +79,7 @@ class Courier < LogStash::Inputs::Base public def register - @logger.info('Starting courier input listener', :address => "#{@host}:#{@port}") + @logger.info 'Starting courier input listener', :address => "#{@host}:#{@port}" options = { logger: @logger, diff --git a/lib/logstash/outputs/courier.rb b/lib/logstash/outputs/courier.rb index ea0d34c6..a87b1d2f 100644 --- a/lib/logstash/outputs/courier.rb +++ b/lib/logstash/outputs/courier.rb @@ -51,9 +51,10 @@ class Courier < LogStash::Outputs::Base public def register - require 'log-courier/client' + @logger.info 'Starting courier output' - @client = LogCourier::Client.new( + options = { + logger: @logger, addresses: @hosts, port: @port, ssl_ca: @ssl_ca, @@ -61,8 +62,11 @@ def register ssl_key: @ssl_key, ssl_key_passphrase: @ssl_key_passphrase, spool_size: @spool_size, - idle_timeout: @idle_timeout - ) + idle_timeout: @idle_timeout, + } + + require 'log-courier/client' + @client = LogCourier::Client.new(options) 
end public From 8a164060a248cda80268721044ce206fba478f01 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Mon, 19 Jan 2015 21:39:29 +0000 Subject: [PATCH 12/75] Fix_version no longer creates repository modification, making contributions easier, (reported in #88) --- .gitignore | 30 +++++++++++-------- build/fix_version | 10 ++----- ...ourier.gemspec => log-courier.gemspec.tmpl | 2 +- ...=> logstash-input-log-courier.gemspec.tmpl | 4 +-- ...> logstash-output-log-courier.gemspec.tmpl | 4 +-- src/lc-lib/core/version.go | 19 ------------ .../{version.go.template => version.go.tmpl} | 0 7 files changed, 24 insertions(+), 45 deletions(-) rename log-courier.gemspec => log-courier.gemspec.tmpl (96%) rename logstash-input-log-courier.gemspec => logstash-input-log-courier.gemspec.tmpl (88%) rename logstash-output-log-courier.gemspec => logstash-output-log-courier.gemspec.tmpl (88%) delete mode 100644 src/lc-lib/core/version.go rename src/lc-lib/core/{version.go.template => version.go.tmpl} (100%) diff --git a/.gitignore b/.gitignore index 84c75c4d..dc2c9a9f 100644 --- a/.gitignore +++ b/.gitignore @@ -1,13 +1,17 @@ -Gemfile.lock -bin -node_modules -pkg -spec/tmp -src/github.com -vendor -version.txt -.Makefile.tags -.bundle -.log-courier -.vagrant -*.gem +/.Makefile.tags +/.bundle +/.log-courier +/.vagrant +/*.gem +/Gemfile.lock +/bin +/log-courier.gemspec +/logstash-input-log-courier.gemspec +/logstash-output-log-courier.gemspec +/node_modules +/pkg +/spec/tmp +/src/github.com +/src/lc-lib/core/version.go +/vendor +/version.txt diff --git a/build/fix_version b/build/fix_version index 006ec5f5..231b360e 100755 --- a/build/fix_version +++ b/build/fix_version @@ -15,17 +15,11 @@ else fi # Patch version.go -sed "s/\\(const *Log_Courier_Version *string *= *\"\\)[^\"]*\\(\"\\)/\\1${VERSION}\\2/g" src/lc-lib/core/version.go > src/lc-lib/core/version.go.tmp -\mv -f src/lc-lib/core/version.go.tmp src/lc-lib/core/version.go +sed "s//${VERSION}/g" src/lc-lib/core/version.go.tmpl > 
src/lc-lib/core/version.go # Patch the gemspecs for GEM in log-courier logstash-input-log-courier logstash-output-log-courier; do - sed "s/\\(gem.version *= *'\\)[^']*\\('\\)/\\1${VERSION}\\2/g" ${GEM}.gemspec > ${GEM}.gemspec.tmp - \mv -f ${GEM}.gemspec.tmp ${GEM}.gemspec - [ ${GEM#logstash-} != $GEM ] && { - sed "s/\\(gem.add_runtime_dependency *'log-courier' *, *'= *\\)[^']*\\('\\)/\\1${VERSION}\\2/g" ${GEM}.gemspec > ${GEM}.gemspec.tmp - \mv -f ${GEM}.gemspec.tmp ${GEM}.gemspec - } + sed "s//${VERSION}/g" ${GEM}.gemspec.tmpl > ${GEM}.gemspec done echo "${VERSION}" > version.txt diff --git a/log-courier.gemspec b/log-courier.gemspec.tmpl similarity index 96% rename from log-courier.gemspec rename to log-courier.gemspec.tmpl index 242d474a..ad6ae7e7 100644 --- a/log-courier.gemspec +++ b/log-courier.gemspec.tmpl @@ -1,6 +1,6 @@ Gem::Specification.new do |gem| gem.name = 'log-courier' - gem.version = '1.3' + gem.version = '' gem.description = 'Log Courier library' gem.summary = 'Receive events from Log Courier and transmit between LogStash instances' gem.homepage = 'https://github.com/driskell/log-courier' diff --git a/logstash-input-log-courier.gemspec b/logstash-input-log-courier.gemspec.tmpl similarity index 88% rename from logstash-input-log-courier.gemspec rename to logstash-input-log-courier.gemspec.tmpl index 638af4a5..08cb77e8 100644 --- a/logstash-input-log-courier.gemspec +++ b/logstash-input-log-courier.gemspec.tmpl @@ -1,6 +1,6 @@ Gem::Specification.new do |gem| gem.name = 'logstash-input-log-courier' - gem.version = '1.3' + gem.version = '' gem.description = 'Log Courier Input Logstash Plugin' gem.summary = 'Receive events from Log Courier and Logstash using the Log Courier protocol' gem.homepage = 'https://github.com/driskell/log-courier' @@ -16,5 +16,5 @@ Gem::Specification.new do |gem| gem.metadata = { 'logstash_plugin' => 'true', 'group' => 'input' } gem.add_runtime_dependency 'logstash', '~> 1.4' - gem.add_runtime_dependency 'log-courier', '= 
1.3' + gem.add_runtime_dependency 'log-courier', '= ' end diff --git a/logstash-output-log-courier.gemspec b/logstash-output-log-courier.gemspec.tmpl similarity index 88% rename from logstash-output-log-courier.gemspec rename to logstash-output-log-courier.gemspec.tmpl index 3b68ee77..c8f6c5b2 100644 --- a/logstash-output-log-courier.gemspec +++ b/logstash-output-log-courier.gemspec.tmpl @@ -1,6 +1,6 @@ Gem::Specification.new do |gem| gem.name = 'logstash-output-log-courier' - gem.version = '1.3' + gem.version = '' gem.description = 'Log Courier Output Logstash Plugin' gem.summary = 'Transmit events from one Logstash instance to another using the Log Courier protocol' gem.homepage = 'https://github.com/driskell/log-courier' @@ -16,5 +16,5 @@ Gem::Specification.new do |gem| gem.metadata = { 'logstash_plugin' => 'true', 'group' => 'input' } gem.add_runtime_dependency 'logstash', '~> 1.4' - gem.add_runtime_dependency 'log-courier', '= 1.3' + gem.add_runtime_dependency 'log-courier', '= ' end diff --git a/src/lc-lib/core/version.go b/src/lc-lib/core/version.go deleted file mode 100644 index 5fea5ae1..00000000 --- a/src/lc-lib/core/version.go +++ /dev/null @@ -1,19 +0,0 @@ -/* -* Copyright 2014 Jason Woods. -* -* Licensed under the Apache License, Version 2.0 (the "License"); -* you may not use this file except in compliance with the License. -* You may obtain a copy of the License at -* -* http://www.apache.org/licenses/LICENSE-2.0 -* -* Unless required by applicable law or agreed to in writing, software -* distributed under the License is distributed on an "AS IS" BASIS, -* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -* See the License for the specific language governing permissions and -* limitations under the License. 
-*/ - -package core - -const Log_Courier_Version string = "1.3" diff --git a/src/lc-lib/core/version.go.template b/src/lc-lib/core/version.go.tmpl similarity index 100% rename from src/lc-lib/core/version.go.template rename to src/lc-lib/core/version.go.tmpl From f6a8f18157217b841a776084d296032bbea7a9b4 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Mon, 19 Jan 2015 21:45:10 +0000 Subject: [PATCH 13/75] Update change log --- docs/ChangeLog.md | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/docs/ChangeLog.md b/docs/ChangeLog.md index c5688bbb..0088ff13 100644 --- a/docs/ChangeLog.md +++ b/docs/ChangeLog.md @@ -43,6 +43,7 @@ failures will still round robin. configuration. (Thanks @mhughes - #88) * A configuration reload will now reopen log files. (#91) * Implement support for SRV record server entries (#85) +* Fix Log Courier output plugin (#96) ***Security*** @@ -53,6 +54,8 @@ courier plugins to further enhance security when using the TLS transport. ## 1.3 *2nd January 2015* +***Changes*** + * Added support for Go 1.4 * Added new "host" option to override the "host" field in generated events (elasticsearch/logstash-forwarder#260) @@ -73,6 +76,11 @@ events and add regression test correctly * Various other minor tweaks and fixes +***Known Issues*** + +* The Logstash courier output plugin triggers a NameError. This issue is fixed
+ ## 1.2 *1st December 2014* From 8ee8cc71ea6429431bb5a83707b0da95b2de4822 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Mon, 19 Jan 2015 21:55:02 +0000 Subject: [PATCH 14/75] Ensure that non-git downloads of Log Courier can be built, by providing version_short.txt which increments with tagged releases --- build/fix_version | 19 ++++++++++++++----- build/push_gems | 2 +- version_short.txt | 1 + 3 files changed, 16 insertions(+), 6 deletions(-) create mode 100644 version_short.txt diff --git a/build/fix_version b/build/fix_version index 231b360e..7b27d0bd 100755 --- a/build/fix_version +++ b/build/fix_version @@ -1,17 +1,20 @@ #!/bin/bash -# If this is not a git repository, use the existing version -if [ ! -d '.git' ]; then - exit -fi - if [ -n "$1" ]; then + # When specified on the command line, it's always short, and means we're preparing a release VERSION="$1" + VERSION_SHORT="$VERSION" +elif [ ! -d '.git' ]; then + # Not a git repository, so use the existing version_short.txt + VERSION="$(cat version_short.txt)" + VERSION_SHORT="$VERSION" else # Describe version from Git, and ensure the only "-xxx" is the git revision # This ensures that gem builds only add one ".pre" tag automatically VERSION="$(git describe | sed 's/-\([0-9][0-9]*\)-\([0-9a-z][0-9a-z]*\)$/.\1.\2/g')" VERSION="${VERSION#v}" + VERSION_SHORT=$(git describe --abbrev=0) + VERSION_SHORT="${VERSION_SHORT#v}" fi # Patch version.go @@ -22,5 +25,11 @@ for GEM in log-courier logstash-input-log-courier logstash-output-log-courier; d sed "s//${VERSION}/g" ${GEM}.gemspec.tmpl > ${GEM}.gemspec done +# Store the full version in version.txt for other scripts to use, such as push_gems echo "${VERSION}" > version.txt + +# Store the nearest tag in version_short.txt - this is the only file stored in the repo +# This file is used as the version if we download Log Courier as a non-git package +echo "${VERSION_SHORT}" > version_short.txt + echo "Set Log Courier Version ${VERSION}" diff --git a/build/push_gems 
b/build/push_gems index 5f199bea..365aa577 100755 --- a/build/push_gems +++ b/build/push_gems @@ -1,6 +1,6 @@ #!/bin/bash -for GEM in *-$(cat version.txt).gem; do +for GEM in *-$(cat version_short.txt).gem; do echo "- ${GEM}" gem push $GEM done diff --git a/version_short.txt b/version_short.txt new file mode 100644 index 00000000..7e32cd56 --- /dev/null +++ b/version_short.txt @@ -0,0 +1 @@ +1.3 From 7b0960f93c11661a197325b73475221e24c84027 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Mon, 19 Jan 2015 22:35:10 +0000 Subject: [PATCH 15/75] Improve build system to allow github.com import paths in the packages --- Makefile | 7 +++++-- build/setup_root | 5 +++++ src/lc-admin/lc-admin.go | 4 ++-- src/lc-lib/admin/client.go | 2 +- src/lc-lib/admin/listener.go | 2 +- src/lc-lib/admin/responses.go | 2 +- src/lc-lib/codecs/filter.go | 2 +- src/lc-lib/codecs/filter_test.go | 2 +- src/lc-lib/codecs/multiline.go | 2 +- src/lc-lib/codecs/multiline_test.go | 2 +- src/lc-lib/codecs/plain.go | 2 +- src/lc-lib/harvester/harvester.go | 2 +- src/lc-lib/prospector/info.go | 6 +++--- src/lc-lib/prospector/prospector.go | 8 ++++---- src/lc-lib/publisher/pending_payload.go | 2 +- src/lc-lib/publisher/pending_payload_test.go | 2 +- src/lc-lib/publisher/publisher.go | 4 ++-- src/lc-lib/registrar/event_ack.go | 2 +- src/lc-lib/registrar/event_deleted.go | 2 +- src/lc-lib/registrar/event_discover.go | 2 +- src/lc-lib/registrar/event_renamed.go | 2 +- src/lc-lib/registrar/eventspool.go | 2 +- src/lc-lib/registrar/registrar.go | 2 +- src/lc-lib/spooler/spooler.go | 4 ++-- src/lc-lib/transports/tcp.go | 2 +- src/lc-lib/transports/zmq.go | 2 +- src/lc-lib/transports/zmq4.go | 2 +- src/log-courier/log-courier.go | 18 +++++++++--------- 28 files changed, 52 insertions(+), 44 deletions(-) create mode 100755 build/setup_root diff --git a/Makefile b/Makefile index b931e32f..2af416b3 100644 --- a/Makefile +++ b/Makefile @@ -1,4 +1,4 @@ -.PHONY: prepare fix_version all log-courier gem gem_plugins 
push_gems test test_go test_rspec doc profile benchmark jrprofile jrbenchmark clean +.PHONY: prepare fix_version setup_root all log-courier gem gem_plugins push_gems test test_go test_rspec doc profile benchmark jrprofile jrbenchmark clean MAKEFILE := $(word $(words $(MAKEFILE_LIST)),$(MAKEFILE_LIST)) GOPATH := $(patsubst %/,%,$(dir $(abspath $(MAKEFILE)))) @@ -102,7 +102,10 @@ endif fix_version: build/fix_version -prepare: | fix_version +setup_root: + build/setup_root + +prepare: | fix_version setup_root @go version >/dev/null || (echo "Go not found. You need to install Go version 1.2-1.4: http://golang.org/doc/install"; false) @go version | grep -q 'go version go1.[234]' || (echo "Go version 1.2-1.4, you have a version of Go that is not supported."; false) @echo "GOPATH: $${GOPATH}" diff --git a/build/setup_root b/build/setup_root new file mode 100755 index 00000000..79e6bedd --- /dev/null +++ b/build/setup_root @@ -0,0 +1,5 @@ +#!/bin/bash + +# Allow the source code to refer to github.com/driskell/log-courier paths +mkdir -p src/github.com/driskell +ln -nsf ../../.. 
src/github.com/driskell/log-courier diff --git a/src/lc-admin/lc-admin.go b/src/lc-admin/lc-admin.go index d0353099..bd493f84 100644 --- a/src/lc-admin/lc-admin.go +++ b/src/lc-admin/lc-admin.go @@ -20,8 +20,8 @@ import ( "bufio" "flag" "fmt" - "lc-lib/admin" - "lc-lib/core" + "github.com/driskell/log-courier/src/lc-lib/admin" + "github.com/driskell/log-courier/src/lc-lib/core" "os" "os/signal" "strings" diff --git a/src/lc-lib/admin/client.go b/src/lc-lib/admin/client.go index 0109be0f..27173e79 100644 --- a/src/lc-lib/admin/client.go +++ b/src/lc-lib/admin/client.go @@ -19,7 +19,7 @@ package admin import ( "encoding/gob" "fmt" - "lc-lib/core" + "github.com/driskell/log-courier/src/lc-lib/core" "net" "strings" "time" diff --git a/src/lc-lib/admin/listener.go b/src/lc-lib/admin/listener.go index 0de9146c..6a2fcd2b 100644 --- a/src/lc-lib/admin/listener.go +++ b/src/lc-lib/admin/listener.go @@ -18,7 +18,7 @@ package admin import ( "fmt" - "lc-lib/core" + "github.com/driskell/log-courier/src/lc-lib/core" "net" "strings" "time" diff --git a/src/lc-lib/admin/responses.go b/src/lc-lib/admin/responses.go index 5545b5f6..55d03a1f 100644 --- a/src/lc-lib/admin/responses.go +++ b/src/lc-lib/admin/responses.go @@ -18,7 +18,7 @@ package admin import ( "encoding/gob" - "lc-lib/core" + "github.com/driskell/log-courier/src/lc-lib/core" "time" ) diff --git a/src/lc-lib/codecs/filter.go b/src/lc-lib/codecs/filter.go index 47116141..157e57c0 100644 --- a/src/lc-lib/codecs/filter.go +++ b/src/lc-lib/codecs/filter.go @@ -19,7 +19,7 @@ package codecs import ( "errors" "fmt" - "lc-lib/core" + "github.com/driskell/log-courier/src/lc-lib/core" "regexp" ) diff --git a/src/lc-lib/codecs/filter_test.go b/src/lc-lib/codecs/filter_test.go index 4d34624a..20964e0e 100644 --- a/src/lc-lib/codecs/filter_test.go +++ b/src/lc-lib/codecs/filter_test.go @@ -1,7 +1,7 @@ package codecs import ( - "lc-lib/core" + "github.com/driskell/log-courier/src/lc-lib/core" "testing" ) diff --git 
a/src/lc-lib/codecs/multiline.go b/src/lc-lib/codecs/multiline.go index 88bc5595..e0f06e09 100644 --- a/src/lc-lib/codecs/multiline.go +++ b/src/lc-lib/codecs/multiline.go @@ -19,7 +19,7 @@ package codecs import ( "errors" "fmt" - "lc-lib/core" + "github.com/driskell/log-courier/src/lc-lib/core" "regexp" "strings" "sync" diff --git a/src/lc-lib/codecs/multiline_test.go b/src/lc-lib/codecs/multiline_test.go index 7c8069a5..820259cf 100644 --- a/src/lc-lib/codecs/multiline_test.go +++ b/src/lc-lib/codecs/multiline_test.go @@ -1,7 +1,7 @@ package codecs import ( - "lc-lib/core" + "github.com/driskell/log-courier/src/lc-lib/core" "sync" "testing" "time" diff --git a/src/lc-lib/codecs/plain.go b/src/lc-lib/codecs/plain.go index 865e70e8..0a5de2a1 100644 --- a/src/lc-lib/codecs/plain.go +++ b/src/lc-lib/codecs/plain.go @@ -17,7 +17,7 @@ package codecs import ( - "lc-lib/core" + "github.com/driskell/log-courier/src/lc-lib/core" ) type CodecPlainFactory struct { diff --git a/src/lc-lib/harvester/harvester.go b/src/lc-lib/harvester/harvester.go index 6767f147..669b8773 100644 --- a/src/lc-lib/harvester/harvester.go +++ b/src/lc-lib/harvester/harvester.go @@ -22,7 +22,7 @@ package harvester import ( "fmt" "io" - "lc-lib/core" + "github.com/driskell/log-courier/src/lc-lib/core" "os" "sync" "time" diff --git a/src/lc-lib/prospector/info.go b/src/lc-lib/prospector/info.go index 63477551..bbcec315 100644 --- a/src/lc-lib/prospector/info.go +++ b/src/lc-lib/prospector/info.go @@ -17,9 +17,9 @@ package prospector import ( - "lc-lib/core" - "lc-lib/harvester" - "lc-lib/registrar" + "github.com/driskell/log-courier/src/lc-lib/core" + "github.com/driskell/log-courier/src/lc-lib/harvester" + "github.com/driskell/log-courier/src/lc-lib/registrar" "os" ) diff --git a/src/lc-lib/prospector/prospector.go b/src/lc-lib/prospector/prospector.go index 280507c2..ad7f7339 100644 --- a/src/lc-lib/prospector/prospector.go +++ b/src/lc-lib/prospector/prospector.go @@ -21,10 +21,10 @@ package 
prospector import ( "fmt" - "lc-lib/core" - "lc-lib/harvester" - "lc-lib/registrar" - "lc-lib/spooler" + "github.com/driskell/log-courier/src/lc-lib/core" + "github.com/driskell/log-courier/src/lc-lib/harvester" + "github.com/driskell/log-courier/src/lc-lib/registrar" + "github.com/driskell/log-courier/src/lc-lib/spooler" "os" "path/filepath" "time" diff --git a/src/lc-lib/publisher/pending_payload.go b/src/lc-lib/publisher/pending_payload.go index bc545cfa..170bb2db 100644 --- a/src/lc-lib/publisher/pending_payload.go +++ b/src/lc-lib/publisher/pending_payload.go @@ -21,7 +21,7 @@ import ( "compress/zlib" "encoding/binary" "errors" - "lc-lib/core" + "github.com/driskell/log-courier/src/lc-lib/core" "time" ) diff --git a/src/lc-lib/publisher/pending_payload_test.go b/src/lc-lib/publisher/pending_payload_test.go index f7ddba37..1ab0edb2 100644 --- a/src/lc-lib/publisher/pending_payload_test.go +++ b/src/lc-lib/publisher/pending_payload_test.go @@ -17,7 +17,7 @@ package publisher import ( - "lc-lib/core" + "github.com/driskell/log-courier/src/lc-lib/core" "time" "testing" ) diff --git a/src/lc-lib/publisher/publisher.go b/src/lc-lib/publisher/publisher.go index e5b3c17b..5c42ebad 100644 --- a/src/lc-lib/publisher/publisher.go +++ b/src/lc-lib/publisher/publisher.go @@ -24,8 +24,8 @@ import ( "encoding/binary" "errors" "fmt" - "lc-lib/core" - "lc-lib/registrar" + "github.com/driskell/log-courier/src/lc-lib/core" + "github.com/driskell/log-courier/src/lc-lib/registrar" "math/rand" "sync" "time" diff --git a/src/lc-lib/registrar/event_ack.go b/src/lc-lib/registrar/event_ack.go index 11d2169f..331f68c9 100644 --- a/src/lc-lib/registrar/event_ack.go +++ b/src/lc-lib/registrar/event_ack.go @@ -17,7 +17,7 @@ package registrar import ( - "lc-lib/core" + "github.com/driskell/log-courier/src/lc-lib/core" ) type AckEvent struct { diff --git a/src/lc-lib/registrar/event_deleted.go b/src/lc-lib/registrar/event_deleted.go index a713fce1..28f9af52 100644 --- 
a/src/lc-lib/registrar/event_deleted.go +++ b/src/lc-lib/registrar/event_deleted.go @@ -17,7 +17,7 @@ package registrar import ( - "lc-lib/core" + "github.com/driskell/log-courier/src/lc-lib/core" ) type DeletedEvent struct { diff --git a/src/lc-lib/registrar/event_discover.go b/src/lc-lib/registrar/event_discover.go index c0ff8b0c..213bf262 100644 --- a/src/lc-lib/registrar/event_discover.go +++ b/src/lc-lib/registrar/event_discover.go @@ -17,7 +17,7 @@ package registrar import ( - "lc-lib/core" + "github.com/driskell/log-courier/src/lc-lib/core" "os" ) diff --git a/src/lc-lib/registrar/event_renamed.go b/src/lc-lib/registrar/event_renamed.go index 0ab29327..eb408dc2 100644 --- a/src/lc-lib/registrar/event_renamed.go +++ b/src/lc-lib/registrar/event_renamed.go @@ -17,7 +17,7 @@ package registrar import ( - "lc-lib/core" + "github.com/driskell/log-courier/src/lc-lib/core" ) type RenamedEvent struct { diff --git a/src/lc-lib/registrar/eventspool.go b/src/lc-lib/registrar/eventspool.go index 0579c4c2..0be0f594 100644 --- a/src/lc-lib/registrar/eventspool.go +++ b/src/lc-lib/registrar/eventspool.go @@ -17,7 +17,7 @@ package registrar import ( - "lc-lib/core" + "github.com/driskell/log-courier/src/lc-lib/core" ) type RegistrarEvent interface { diff --git a/src/lc-lib/registrar/registrar.go b/src/lc-lib/registrar/registrar.go index 6dd21537..b190e261 100644 --- a/src/lc-lib/registrar/registrar.go +++ b/src/lc-lib/registrar/registrar.go @@ -22,7 +22,7 @@ package registrar import ( "encoding/json" "fmt" - "lc-lib/core" + "github.com/driskell/log-courier/src/lc-lib/core" "os" "sync" ) diff --git a/src/lc-lib/spooler/spooler.go b/src/lc-lib/spooler/spooler.go index 709520d4..a34a7052 100644 --- a/src/lc-lib/spooler/spooler.go +++ b/src/lc-lib/spooler/spooler.go @@ -20,8 +20,8 @@ package spooler import ( - "lc-lib/core" - "lc-lib/publisher" + "github.com/driskell/log-courier/src/lc-lib/core" + "github.com/driskell/log-courier/src/lc-lib/publisher" "time" ) diff --git 
a/src/lc-lib/transports/tcp.go b/src/lc-lib/transports/tcp.go index 29a3a41a..0074fe50 100644 --- a/src/lc-lib/transports/tcp.go +++ b/src/lc-lib/transports/tcp.go @@ -27,7 +27,7 @@ import ( "encoding/pem" "fmt" "io/ioutil" - "lc-lib/core" + "github.com/driskell/log-courier/src/lc-lib/core" "math/rand" "net" "regexp" diff --git a/src/lc-lib/transports/zmq.go b/src/lc-lib/transports/zmq.go index f3d071fb..6d423d94 100644 --- a/src/lc-lib/transports/zmq.go +++ b/src/lc-lib/transports/zmq.go @@ -24,7 +24,7 @@ import ( "errors" "fmt" zmq "github.com/alecthomas/gozmq" - "lc-lib/core" + "github.com/driskell/log-courier/src/lc-lib/core" "net" "regexp" "runtime" diff --git a/src/lc-lib/transports/zmq4.go b/src/lc-lib/transports/zmq4.go index 4b783292..79e9b54f 100644 --- a/src/lc-lib/transports/zmq4.go +++ b/src/lc-lib/transports/zmq4.go @@ -29,7 +29,7 @@ import ( "encoding/binary" "fmt" zmq "github.com/alecthomas/gozmq" - "lc-lib/core" + "github.com/driskell/log-courier/src/lc-lib/core" "syscall" "unsafe" ) diff --git a/src/log-courier/log-courier.go b/src/log-courier/log-courier.go index 92fc5520..c8f12a62 100644 --- a/src/log-courier/log-courier.go +++ b/src/log-courier/log-courier.go @@ -23,21 +23,21 @@ import ( "flag" "fmt" "github.com/op/go-logging" - "lc-lib/admin" - "lc-lib/core" - "lc-lib/harvester" - "lc-lib/prospector" - "lc-lib/spooler" - "lc-lib/publisher" - "lc-lib/registrar" + "github.com/driskell/log-courier/src/lc-lib/admin" + "github.com/driskell/log-courier/src/lc-lib/core" + "github.com/driskell/log-courier/src/lc-lib/harvester" + "github.com/driskell/log-courier/src/lc-lib/prospector" + "github.com/driskell/log-courier/src/lc-lib/spooler" + "github.com/driskell/log-courier/src/lc-lib/publisher" + "github.com/driskell/log-courier/src/lc-lib/registrar" stdlog "log" "os" "runtime/pprof" "time" ) -import _ "lc-lib/codecs" -import _ "lc-lib/transports" +import _ "github.com/driskell/log-courier/src/lc-lib/codecs" +import _ 
"github.com/driskell/log-courier/src/lc-lib/transports" func main() { logcourier := NewLogCourier() From 8ee3fa44c9d00571dd83dea20b2271622cb6e0e2 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Mon, 19 Jan 2015 22:37:17 +0000 Subject: [PATCH 16/75] Run gofmt - Indentation should always be a tab, as per common sense and gofmt defaults --- src/lc-admin/lc-admin.go | 662 ++++----- src/lc-curvekey/lc-curvekey.go | 152 +-- src/lc-lib/admin/client.go | 184 +-- src/lc-lib/admin/listener.go | 234 ++-- src/lc-lib/admin/logging.go | 2 +- src/lc-lib/admin/responses.go | 42 +- src/lc-lib/admin/server.go | 202 +-- src/lc-lib/admin/transport.go | 4 +- src/lc-lib/admin/transport_tcp.go | 2 +- src/lc-lib/admin/transport_unix.go | 2 +- src/lc-lib/codecs/filter.go | 114 +- src/lc-lib/codecs/filter_test.go | 158 +-- src/lc-lib/codecs/multiline.go | 378 +++--- src/lc-lib/codecs/multiline_test.go | 388 +++--- src/lc-lib/codecs/plain.go | 32 +- src/lc-lib/core/codec.go | 22 +- src/lc-lib/core/config.go | 940 ++++++------- src/lc-lib/core/event.go | 10 +- src/lc-lib/core/logging.go | 2 +- src/lc-lib/core/pipeline.go | 120 +- src/lc-lib/core/snapshot.go | 72 +- src/lc-lib/core/stream.go | 2 +- src/lc-lib/core/transport.go | 32 +- src/lc-lib/core/util.go | 34 +- src/lc-lib/harvester/harvester.go | 696 +++++----- src/lc-lib/harvester/harvester_other.go | 4 +- src/lc-lib/harvester/harvester_windows.go | 32 +- src/lc-lib/harvester/linereader.go | 156 +-- src/lc-lib/harvester/linereader_test.go | 159 ++- src/lc-lib/harvester/logging.go | 2 +- src/lc-lib/prospector/errors.go | 8 +- src/lc-lib/prospector/info.go | 134 +- src/lc-lib/prospector/logging.go | 4 +- src/lc-lib/prospector/prospector.go | 840 ++++++------ src/lc-lib/prospector/snapshot.go | 2 +- src/lc-lib/publisher/logging.go | 2 +- src/lc-lib/publisher/pending_payload.go | 166 +-- src/lc-lib/publisher/pending_payload_test.go | 4 +- src/lc-lib/publisher/publisher.go | 1118 ++++++++-------- src/lc-lib/registrar/event_ack.go | 36 +- 
src/lc-lib/registrar/event_deleted.go | 26 +- src/lc-lib/registrar/event_discover.go | 38 +- src/lc-lib/registrar/event_renamed.go | 30 +- src/lc-lib/registrar/eventspool.go | 32 +- src/lc-lib/registrar/filestate.go | 34 +- src/lc-lib/registrar/filestateos_darwin.go | 20 +- src/lc-lib/registrar/filestateos_freebsd.go | 20 +- src/lc-lib/registrar/filestateos_linux.go | 20 +- src/lc-lib/registrar/filestateos_openbsd.go | 20 +- src/lc-lib/registrar/filestateos_windows.go | 72 +- src/lc-lib/registrar/logging.go | 2 +- src/lc-lib/registrar/registrar.go | 230 ++-- src/lc-lib/registrar/registrar_other.go | 26 +- src/lc-lib/registrar/registrar_windows.go | 42 +- src/lc-lib/spooler/logging.go | 2 +- src/lc-lib/spooler/spooler.go | 238 ++-- src/lc-lib/transports/logging.go | 2 +- src/lc-lib/transports/tcp.go | 676 +++++----- src/lc-lib/transports/tcp_wrap.go | 66 +- src/lc-lib/transports/zmq.go | 1264 +++++++++--------- src/lc-lib/transports/zmq3.go | 110 +- src/lc-lib/transports/zmq4.go | 212 +-- src/lc-tlscert/lc-tlscert.go | 324 ++--- src/log-courier/log-courier.go | 487 ++++--- src/log-courier/log-courier_nonwindows.go | 60 +- src/log-courier/log-courier_windows.go | 14 +- src/log-courier/logging.go | 76 +- 67 files changed, 5647 insertions(+), 5649 deletions(-) diff --git a/src/lc-admin/lc-admin.go b/src/lc-admin/lc-admin.go index bd493f84..fbce8ebc 100644 --- a/src/lc-admin/lc-admin.go +++ b/src/lc-admin/lc-admin.go @@ -12,401 +12,401 @@ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
-*/ + */ package main import ( - "bufio" - "flag" - "fmt" - "github.com/driskell/log-courier/src/lc-lib/admin" - "github.com/driskell/log-courier/src/lc-lib/core" - "os" - "os/signal" - "strings" - "text/scanner" - "time" + "bufio" + "flag" + "fmt" + "github.com/driskell/log-courier/src/lc-lib/admin" + "github.com/driskell/log-courier/src/lc-lib/core" + "os" + "os/signal" + "strings" + "text/scanner" + "time" ) type CommandError struct { - message string + message string } func (c *CommandError) Error() string { - return c.message + return c.message } var CommandEOF *CommandError = &CommandError{"EOF"} var CommandTooManyArgs *CommandError = &CommandError{"Too many arguments"} type Admin struct { - client *admin.Client - connected bool - quiet bool - admin_connect string - scanner scanner.Scanner - scanner_err error + client *admin.Client + connected bool + quiet bool + admin_connect string + scanner scanner.Scanner + scanner_err error } func NewAdmin(quiet bool, admin_connect string) *Admin { - return &Admin{ - quiet: quiet, - admin_connect: admin_connect, - } + return &Admin{ + quiet: quiet, + admin_connect: admin_connect, + } } func (a *Admin) connect() error { - if !a.connected { - var err error + if !a.connected { + var err error - if !a.quiet { - fmt.Printf("Attempting connection to %s...\n", a.admin_connect) - } + if !a.quiet { + fmt.Printf("Attempting connection to %s...\n", a.admin_connect) + } - if a.client, err = admin.NewClient(a.admin_connect); err != nil { - fmt.Printf("Failed to connect: %s\n", err) - return err - } + if a.client, err = admin.NewClient(a.admin_connect); err != nil { + fmt.Printf("Failed to connect: %s\n", err) + return err + } - if !a.quiet { - fmt.Printf("Connected\n\n") - } + if !a.quiet { + fmt.Printf("Connected\n\n") + } - a.connected = true - } + a.connected = true + } - return nil + return nil } func (a *Admin) ProcessCommand(command string) bool { - var reconnected bool - - for { - if !a.connected { - if err := a.connect(); err 
!= nil { - return false - } - - reconnected = true - } - - var err error - - a.initScanner(command) - if command, err = a.scanIdent(); err != nil { - goto Error - } - - switch command { - case "reload": - if !a.scanEOF() { - err = CommandTooManyArgs - break - } - - err = a.client.Reload() - if err != nil { - break - } - - fmt.Printf("Configuration reload successful\n") - case "status": - var format string - format, err = a.scanIdent() - if err != nil && err != CommandEOF { - break - } - - if !a.scanEOF() { - err = CommandTooManyArgs - break - } - - var snaps *core.Snapshot - snaps, err = a.client.FetchSnapshot() - if err != nil { - break - } - - a.renderSnap(format, snaps) - case "help": - if !a.scanEOF() { - err = CommandTooManyArgs - break - } - - PrintHelp() - default: - err = &CommandError{fmt.Sprintf("Unknown command: %s", command)} - } - - if err == nil { - return true - } - - Error: - if _, ok := err.(*CommandError); ok { - fmt.Printf("Parse error: %s\n", err) - return false - } else if _, ok := err.(*admin.ErrorResponse); ok { - fmt.Printf("Log Courier returned an error: %s\n", err) - return false - } else { - a.connected = false - fmt.Printf("Connection error: %s\n", err) - } - - if reconnected { - break - } - } - - return false + var reconnected bool + + for { + if !a.connected { + if err := a.connect(); err != nil { + return false + } + + reconnected = true + } + + var err error + + a.initScanner(command) + if command, err = a.scanIdent(); err != nil { + goto Error + } + + switch command { + case "reload": + if !a.scanEOF() { + err = CommandTooManyArgs + break + } + + err = a.client.Reload() + if err != nil { + break + } + + fmt.Printf("Configuration reload successful\n") + case "status": + var format string + format, err = a.scanIdent() + if err != nil && err != CommandEOF { + break + } + + if !a.scanEOF() { + err = CommandTooManyArgs + break + } + + var snaps *core.Snapshot + snaps, err = a.client.FetchSnapshot() + if err != nil { + break + } + + 
a.renderSnap(format, snaps) + case "help": + if !a.scanEOF() { + err = CommandTooManyArgs + break + } + + PrintHelp() + default: + err = &CommandError{fmt.Sprintf("Unknown command: %s", command)} + } + + if err == nil { + return true + } + + Error: + if _, ok := err.(*CommandError); ok { + fmt.Printf("Parse error: %s\n", err) + return false + } else if _, ok := err.(*admin.ErrorResponse); ok { + fmt.Printf("Log Courier returned an error: %s\n", err) + return false + } else { + a.connected = false + fmt.Printf("Connection error: %s\n", err) + } + + if reconnected { + break + } + } + + return false } func (a *Admin) initScanner(command string) { - a.scanner.Init(strings.NewReader(command)) - a.scanner.Mode = scanner.ScanIdents | scanner.ScanInts | scanner.ScanStrings - a.scanner.Whitespace = 1<<' ' + a.scanner.Init(strings.NewReader(command)) + a.scanner.Mode = scanner.ScanIdents | scanner.ScanInts | scanner.ScanStrings + a.scanner.Whitespace = 1 << ' ' - a.scanner.Error = func(s *scanner.Scanner, msg string) { - a.scanner_err = &CommandError{msg} - } + a.scanner.Error = func(s *scanner.Scanner, msg string) { + a.scanner_err = &CommandError{msg} + } } func (a *Admin) scanIdent() (string, error) { - r := a.scanner.Scan() - if a.scanner_err != nil { - return "", a.scanner_err - } - switch r { - case scanner.Ident: - return a.scanner.TokenText(), nil - case scanner.EOF: - return "", CommandEOF - } - return "", &CommandError{"Invalid token"} + r := a.scanner.Scan() + if a.scanner_err != nil { + return "", a.scanner_err + } + switch r { + case scanner.Ident: + return a.scanner.TokenText(), nil + case scanner.EOF: + return "", CommandEOF + } + return "", &CommandError{"Invalid token"} } func (a *Admin) scanEOF() bool { - r := a.scanner.Scan() - if a.scanner_err == nil && r == scanner.EOF { - return true - } - return false + r := a.scanner.Scan() + if a.scanner_err == nil && r == scanner.EOF { + return true + } + return false } func (a *Admin) renderSnap(format string, snap 
*core.Snapshot) { - switch format { - case "json": - fmt.Printf("{\n") - a.renderSnapJSON("\t", snap) - fmt.Printf("}\n") - default: - a.renderSnapYAML("", snap) - } + switch format { + case "json": + fmt.Printf("{\n") + a.renderSnapJSON("\t", snap) + fmt.Printf("}\n") + default: + a.renderSnapYAML("", snap) + } } func (a *Admin) renderSnapJSON(indent string, snap *core.Snapshot) { - if snap.NumEntries() != 0 { - for i, j := 0, snap.NumEntries(); i < j; i = i+1 { - k, v := snap.Entry(i) - switch t := v.(type) { - case string: - fmt.Printf(indent + "%q: %q", k, t) - case int8, int16, int32, int64, uint8, uint16, uint32, uint64: - fmt.Printf(indent + "%q: %d", k, t) - case float32, float64: - fmt.Printf(indent + "%q: %.2f", k, t) - case time.Time: - fmt.Printf(indent + "%q: %q", k, t.Format("_2 Jan 2006 15.04.05")) - case time.Duration: - fmt.Printf(indent + "%q: %q", k, (t-(t%time.Second)).String()) - default: - fmt.Printf(indent + "%q: %q", k, fmt.Sprintf("%v", t)) - } - if i + 1 < j || snap.NumSubs() != 0 { - fmt.Printf(",\n") - } else { - fmt.Printf("\n") - } - } - } - if snap.NumSubs() != 0 { - for i, j := 0, snap.NumSubs(); i < j; i = i+1 { - sub_snap := snap.Sub(i) - fmt.Printf(indent + "%q: {\n", sub_snap.Description()) - a.renderSnapJSON(indent + "\t", sub_snap) - if i + 1 < j { - fmt.Printf(indent + "},\n") - } else { - fmt.Printf(indent + "}\n") - } - } - } + if snap.NumEntries() != 0 { + for i, j := 0, snap.NumEntries(); i < j; i = i + 1 { + k, v := snap.Entry(i) + switch t := v.(type) { + case string: + fmt.Printf(indent+"%q: %q", k, t) + case int8, int16, int32, int64, uint8, uint16, uint32, uint64: + fmt.Printf(indent+"%q: %d", k, t) + case float32, float64: + fmt.Printf(indent+"%q: %.2f", k, t) + case time.Time: + fmt.Printf(indent+"%q: %q", k, t.Format("_2 Jan 2006 15.04.05")) + case time.Duration: + fmt.Printf(indent+"%q: %q", k, (t - (t % time.Second)).String()) + default: + fmt.Printf(indent+"%q: %q", k, fmt.Sprintf("%v", t)) + } + if i+1 < j || 
snap.NumSubs() != 0 { + fmt.Printf(",\n") + } else { + fmt.Printf("\n") + } + } + } + if snap.NumSubs() != 0 { + for i, j := 0, snap.NumSubs(); i < j; i = i + 1 { + sub_snap := snap.Sub(i) + fmt.Printf(indent+"%q: {\n", sub_snap.Description()) + a.renderSnapJSON(indent+"\t", sub_snap) + if i+1 < j { + fmt.Printf(indent + "},\n") + } else { + fmt.Printf(indent + "}\n") + } + } + } } func (a *Admin) renderSnapYAML(indent string, snap *core.Snapshot) { - if snap.NumEntries() != 0 { - for i, j := 0, snap.NumEntries(); i < j; i = i+1 { - k, v := snap.Entry(i) - switch t := v.(type) { - case string: - fmt.Printf(indent + "%s: %s\n", k, t) - case int, int8, int16, int32, int64, uint8, uint16, uint32, uint64: - fmt.Printf(indent + "%s: %d\n", k, t) - case float32, float64: - fmt.Printf(indent + "%s: %.2f\n", k, t) - case time.Time: - fmt.Printf(indent + "%s: %s\n", k, t.Format("_2 Jan 2006 15.04.05")) - case time.Duration: - fmt.Printf(indent + "%s: %s\n", k, (t-(t%time.Second)).String()) - default: - fmt.Printf(indent + "%s: %v\n", k, t) - } - } - } - if snap.NumSubs() != 0 { - for i, j := 0, snap.NumSubs(); i < j; i = i+1 { - sub_snap := snap.Sub(i) - fmt.Printf(indent + "%s:\n", sub_snap.Description()) - a.renderSnapYAML(indent + " ", sub_snap) - } - } + if snap.NumEntries() != 0 { + for i, j := 0, snap.NumEntries(); i < j; i = i + 1 { + k, v := snap.Entry(i) + switch t := v.(type) { + case string: + fmt.Printf(indent+"%s: %s\n", k, t) + case int, int8, int16, int32, int64, uint8, uint16, uint32, uint64: + fmt.Printf(indent+"%s: %d\n", k, t) + case float32, float64: + fmt.Printf(indent+"%s: %.2f\n", k, t) + case time.Time: + fmt.Printf(indent+"%s: %s\n", k, t.Format("_2 Jan 2006 15.04.05")) + case time.Duration: + fmt.Printf(indent+"%s: %s\n", k, (t - (t % time.Second)).String()) + default: + fmt.Printf(indent+"%s: %v\n", k, t) + } + } + } + if snap.NumSubs() != 0 { + for i, j := 0, snap.NumSubs(); i < j; i = i + 1 { + sub_snap := snap.Sub(i) + 
fmt.Printf(indent+"%s:\n", sub_snap.Description()) + a.renderSnapYAML(indent+" ", sub_snap) + } + } } func (a *Admin) Run() { - signal_chan := make(chan os.Signal, 1) - signal.Notify(signal_chan, os.Interrupt) - - command_chan := make(chan string) - go func() { - var discard bool - reader := bufio.NewReader(os.Stdin) - for { - line, prefix, err := reader.ReadLine() - if err != nil { - break - } else if prefix { - discard = true - } else if discard { - fmt.Printf("Line too long!\n") - discard = false - } else { - command_chan <- string(line) - } - } - }() + signal_chan := make(chan os.Signal, 1) + signal.Notify(signal_chan, os.Interrupt) + + command_chan := make(chan string) + go func() { + var discard bool + reader := bufio.NewReader(os.Stdin) + for { + line, prefix, err := reader.ReadLine() + if err != nil { + break + } else if prefix { + discard = true + } else if discard { + fmt.Printf("Line too long!\n") + discard = false + } else { + command_chan <- string(line) + } + } + }() CommandLoop: - for { - fmt.Printf("> ") - select { - case command := <-command_chan: - if command == "exit" { - break CommandLoop - } - a.ProcessCommand(command) - case <-signal_chan: - fmt.Printf("\n> exit\n") - break CommandLoop - } - } + for { + fmt.Printf("> ") + select { + case command := <-command_chan: + if command == "exit" { + break CommandLoop + } + a.ProcessCommand(command) + case <-signal_chan: + fmt.Printf("\n> exit\n") + break CommandLoop + } + } } func (a *Admin) argsCommand(args []string, watch bool) bool { - var signal_chan chan os.Signal + var signal_chan chan os.Signal - if watch { - signal_chan = make(chan os.Signal, 1) - signal.Notify(signal_chan, os.Interrupt) - } + if watch { + signal_chan = make(chan os.Signal, 1) + signal.Notify(signal_chan, os.Interrupt) + } WatchLoop: - for { - if !a.ProcessCommand(strings.Join(args, " ")) { - if !watch { - return false - } - } - - if !watch { - break - } - - // Gap between repeats - fmt.Printf("\n") - - select { - case 
<-signal_chan: - break WatchLoop - case <-time.After(time.Second): - } - } - - return true + for { + if !a.ProcessCommand(strings.Join(args, " ")) { + if !watch { + return false + } + } + + if !watch { + break + } + + // Gap between repeats + fmt.Printf("\n") + + select { + case <-signal_chan: + break WatchLoop + case <-time.After(time.Second): + } + } + + return true } func PrintHelp() { - fmt.Printf("Available commands:\n") - fmt.Printf(" reload Reload configuration\n") - fmt.Printf(" status Display the current shipping status\n") - fmt.Printf(" exit Exit\n") + fmt.Printf("Available commands:\n") + fmt.Printf(" reload Reload configuration\n") + fmt.Printf(" status Display the current shipping status\n") + fmt.Printf(" exit Exit\n") } func main() { - var version bool - var quiet bool - var watch bool - var admin_connect string - - flag.BoolVar(&version, "version", false, "display the Log Courier client version") - flag.BoolVar(&quiet, "quiet", false, "quietly execute the command line argument and output only the result") - flag.BoolVar(&watch, "watch", false, "repeat the command specified on the command line every second") - flag.StringVar(&admin_connect, "connect", "tcp:127.0.0.1:1234", "the Log Courier instance to connect to (default tcp:127.0.0.1:1234)") - - flag.Parse() - - if version { - fmt.Printf("Log Courier version %s\n", core.Log_Courier_Version) - os.Exit(0) - } - - if !quiet { - fmt.Printf("Log Courier version %s client\n\n", core.Log_Courier_Version) - } - - args := flag.Args() - - if len(args) != 0 { - // Don't require a connection to display the help message - if args[0] == "help" { - PrintHelp() - os.Exit(0) - } - - admin := NewAdmin(quiet, admin_connect) - if admin.argsCommand(args, watch) { - os.Exit(0) - } - os.Exit(1) - } - - if quiet { - fmt.Printf("No command specified on the command line for quiet execution\n") - os.Exit(1) - } - - if watch { - fmt.Printf("No command specified on the command line to watch\n") - os.Exit(1) - } - - admin := 
NewAdmin(quiet, admin_connect) - if err := admin.connect(); err != nil { - return - } - - admin.Run() + var version bool + var quiet bool + var watch bool + var admin_connect string + + flag.BoolVar(&version, "version", false, "display the Log Courier client version") + flag.BoolVar(&quiet, "quiet", false, "quietly execute the command line argument and output only the result") + flag.BoolVar(&watch, "watch", false, "repeat the command specified on the command line every second") + flag.StringVar(&admin_connect, "connect", "tcp:127.0.0.1:1234", "the Log Courier instance to connect to (default tcp:127.0.0.1:1234)") + + flag.Parse() + + if version { + fmt.Printf("Log Courier version %s\n", core.Log_Courier_Version) + os.Exit(0) + } + + if !quiet { + fmt.Printf("Log Courier version %s client\n\n", core.Log_Courier_Version) + } + + args := flag.Args() + + if len(args) != 0 { + // Don't require a connection to display the help message + if args[0] == "help" { + PrintHelp() + os.Exit(0) + } + + admin := NewAdmin(quiet, admin_connect) + if admin.argsCommand(args, watch) { + os.Exit(0) + } + os.Exit(1) + } + + if quiet { + fmt.Printf("No command specified on the command line for quiet execution\n") + os.Exit(1) + } + + if watch { + fmt.Printf("No command specified on the command line to watch\n") + os.Exit(1) + } + + admin := NewAdmin(quiet, admin_connect) + if err := admin.connect(); err != nil { + return + } + + admin.Run() } diff --git a/src/lc-curvekey/lc-curvekey.go b/src/lc-curvekey/lc-curvekey.go index 2d09c349..71339f22 100644 --- a/src/lc-curvekey/lc-curvekey.go +++ b/src/lc-curvekey/lc-curvekey.go @@ -17,10 +17,10 @@ package main import ( - "flag" - "fmt" - zmq "github.com/alecthomas/gozmq" - "syscall" + "flag" + "fmt" + zmq "github.com/alecthomas/gozmq" + "syscall" ) /* @@ -31,70 +31,70 @@ import ( import "C" func main() { - var single bool - - flag.BoolVar(&single, "single", false, "generate a single keypair") - flag.Parse() - - if single { - 
fmt.Println("Generating single keypair...") - - pub, priv, err := CurveKeyPair() - if err != nil { - fmt.Println("An error occurred:", err) - if err == syscall.ENOTSUP { - fmt.Print("Please ensure that your zeromq installation was built with libsodium support") - } - return - } - - fmt.Println("Public key: ", pub) - fmt.Println("Private key: ", priv) - return - } - - fmt.Println("Generating configuration keys...") - fmt.Println("(Use 'genkey --single' to generate a single keypair.)") - fmt.Println("") - - server_pub, server_priv, err := CurveKeyPair() - if err != nil { - fmt.Println("An error occurred:", err) - if err == syscall.ENOTSUP { - fmt.Print("Please ensure that your zeromq installation was built with libsodium support") - } - return - } - - client_pub, client_priv, err := CurveKeyPair() - if err != nil { - fmt.Println("An error occurred:", err) - if err == syscall.ENOTSUP { - fmt.Println("Please ensure that your zeromq installation was built with libsodium support") - } - return - } - - fmt.Println("Copy and paste the following into your Log Courier configuration:") - fmt.Printf(" \"curve server key\": \"%s\",\n", server_pub) - fmt.Printf(" \"curve public key\": \"%s\",\n", client_pub) - fmt.Printf(" \"curve secret key\": \"%s\",\n", client_priv) - fmt.Println("") - fmt.Println("Copy and paste the following into your LogStash configuration:") - fmt.Printf(" curve_secret_key => \"%s\",\n", server_priv) + var single bool + + flag.BoolVar(&single, "single", false, "generate a single keypair") + flag.Parse() + + if single { + fmt.Println("Generating single keypair...") + + pub, priv, err := CurveKeyPair() + if err != nil { + fmt.Println("An error occurred:", err) + if err == syscall.ENOTSUP { + fmt.Print("Please ensure that your zeromq installation was built with libsodium support") + } + return + } + + fmt.Println("Public key: ", pub) + fmt.Println("Private key: ", priv) + return + } + + fmt.Println("Generating configuration keys...") + fmt.Println("(Use 
'genkey --single' to generate a single keypair.)") + fmt.Println("") + + server_pub, server_priv, err := CurveKeyPair() + if err != nil { + fmt.Println("An error occurred:", err) + if err == syscall.ENOTSUP { + fmt.Print("Please ensure that your zeromq installation was built with libsodium support") + } + return + } + + client_pub, client_priv, err := CurveKeyPair() + if err != nil { + fmt.Println("An error occurred:", err) + if err == syscall.ENOTSUP { + fmt.Println("Please ensure that your zeromq installation was built with libsodium support") + } + return + } + + fmt.Println("Copy and paste the following into your Log Courier configuration:") + fmt.Printf(" \"curve server key\": \"%s\",\n", server_pub) + fmt.Printf(" \"curve public key\": \"%s\",\n", client_pub) + fmt.Printf(" \"curve secret key\": \"%s\",\n", client_priv) + fmt.Println("") + fmt.Println("Copy and paste the following into your LogStash configuration:") + fmt.Printf(" curve_secret_key => \"%s\",\n", server_priv) } // Because gozmq does not yet expose this for us, we have to expose it ourselves func CurveKeyPair() (string, string, error) { - var pub [41]C.char - var priv [41]C.char + var pub [41]C.char + var priv [41]C.char - // Because gozmq does not yet expose this for us, we have to expose it ourselves - if rc, err := C.zmq_curve_keypair(&pub[0], &priv[0]); rc != 0 { - return "", "", casterr(err) - } + // Because gozmq does not yet expose this for us, we have to expose it ourselves + if rc, err := C.zmq_curve_keypair(&pub[0], &priv[0]); rc != 0 { + return "", "", casterr(err) + } - return C.GoString(&pub[0]), C.GoString(&priv[0]), nil + return C.GoString(&pub[0]), C.GoString(&priv[0]), nil } // The following is copy-pasted from gozmq's zmq.go @@ -102,21 +102,21 @@ func CurveKeyPair() (string, string, error) { type zmqErrno syscall.Errno func (e zmqErrno) Error() string { - return C.GoString(C.zmq_strerror(C.int(e))) + return C.GoString(C.zmq_strerror(C.int(e))) } func casterr(fromcgo error) 
error { - errno, ok := fromcgo.(syscall.Errno) - if !ok { - return fromcgo - } - zmqerrno := zmqErrno(errno) - switch zmqerrno { - case zmq.ENOTSOCK: - return zmqerrno - } - if zmqerrno >= C.ZMQ_HAUSNUMERO { - return zmqerrno - } - return errno + errno, ok := fromcgo.(syscall.Errno) + if !ok { + return fromcgo + } + zmqerrno := zmqErrno(errno) + switch zmqerrno { + case zmq.ENOTSOCK: + return zmqerrno + } + if zmqerrno >= C.ZMQ_HAUSNUMERO { + return zmqerrno + } + return errno } diff --git a/src/lc-lib/admin/client.go b/src/lc-lib/admin/client.go index 27173e79..7ff172fe 100644 --- a/src/lc-lib/admin/client.go +++ b/src/lc-lib/admin/client.go @@ -12,142 +12,142 @@ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. -*/ + */ package admin import ( - "encoding/gob" - "fmt" - "github.com/driskell/log-courier/src/lc-lib/core" - "net" - "strings" - "time" + "encoding/gob" + "fmt" + "github.com/driskell/log-courier/src/lc-lib/core" + "net" + "strings" + "time" ) type Client struct { - admin_connect string - conn net.Conn - decoder *gob.Decoder + admin_connect string + conn net.Conn + decoder *gob.Decoder } func NewClient(admin_connect string) (*Client, error) { - var err error + var err error - ret := &Client{} + ret := &Client{} - // TODO: handle the connection in a goroutine that can PING - // on idle, and implement a close member to shut it - // it down. For now we'll rely on the auto-reconnect - if ret.conn, err = ret.connect(admin_connect); err != nil { - return nil, err - } + // TODO: handle the connection in a goroutine that can PING + // on idle, and implement a close member to shut it + // it down. 
For now we'll rely on the auto-reconnect + if ret.conn, err = ret.connect(admin_connect); err != nil { + return nil, err + } - ret.decoder = gob.NewDecoder(ret.conn) + ret.decoder = gob.NewDecoder(ret.conn) - return ret, nil + return ret, nil } func (c *Client) connect(admin_connect string) (net.Conn, error) { - connect := strings.SplitN(admin_connect, ":", 2) - if len(connect) == 1 { - connect = append(connect, connect[0]) - connect[0] = "tcp" - } + connect := strings.SplitN(admin_connect, ":", 2) + if len(connect) == 1 { + connect = append(connect, connect[0]) + connect[0] = "tcp" + } - if connector, ok := registeredConnectors[connect[0]]; ok { - return connector(connect[0], connect[1]) - } + if connector, ok := registeredConnectors[connect[0]]; ok { + return connector(connect[0], connect[1]) + } - return nil, fmt.Errorf("Unknown transport specified in connection address: '%s'", connect[0]) + return nil, fmt.Errorf("Unknown transport specified in connection address: '%s'", connect[0]) } func (c *Client) request(command string) (*Response, error) { - if err := c.conn.SetWriteDeadline(time.Now().Add(5 * time.Second)); err != nil { - return nil, err - } + if err := c.conn.SetWriteDeadline(time.Now().Add(5 * time.Second)); err != nil { + return nil, err + } - total_written := 0 + total_written := 0 - for { - wrote, err := c.conn.Write([]byte(command[total_written:4])) - if err != nil { - return nil, err - } + for { + wrote, err := c.conn.Write([]byte(command[total_written:4])) + if err != nil { + return nil, err + } - total_written += wrote - if total_written == 4 { - break - } - } + total_written += wrote + if total_written == 4 { + break + } + } - var response Response + var response Response - if err := c.conn.SetReadDeadline(time.Now().Add(5 * time.Second)); err != nil { - return nil, err - } + if err := c.conn.SetReadDeadline(time.Now().Add(5 * time.Second)); err != nil { + return nil, err + } - if err := c.decoder.Decode(&response); err != nil { - return nil, 
err - } + if err := c.decoder.Decode(&response); err != nil { + return nil, err + } - return &response, nil + return &response, nil } func (c *Client) resolveError(response *Response) error { - ret, ok := response.Response.(*ErrorResponse) - if ok { - return ret - } + ret, ok := response.Response.(*ErrorResponse) + if ok { + return ret + } - return &ErrorResponse{Message: fmt.Sprintf("Unrecognised response: %v\n", ret)} + return &ErrorResponse{Message: fmt.Sprintf("Unrecognised response: %v\n", ret)} } func (c *Client) Ping() error { - response, err := c.request("PING") - if err != nil { - return err - } + response, err := c.request("PING") + if err != nil { + return err + } - if _, ok := response.Response.(*PongResponse); ok { - return nil - } + if _, ok := response.Response.(*PongResponse); ok { + return nil + } - return c.resolveError(response) + return c.resolveError(response) } func (c *Client) Reload() error { - response, err := c.request("RELD") - if err != nil { - return err - } + response, err := c.request("RELD") + if err != nil { + return err + } - if _, ok := response.Response.(*ReloadResponse); ok { - return nil - } + if _, ok := response.Response.(*ReloadResponse); ok { + return nil + } - return c.resolveError(response) + return c.resolveError(response) } func (c *Client) FetchSnapshot() (*core.Snapshot, error) { - response, err := c.request("SNAP") - if err != nil { - return nil, err - } - - if ret, ok := response.Response.(*core.Snapshot); ok { - return ret, nil - } - - // Backwards compatibility - if ret, ok := response.Response.([]*core.Snapshot); ok { - snap := core.NewSnapshot("Log Courier") - for _, sub := range ret { - snap.AddSub(sub) - } - snap.Sort() - return snap, nil - } - - return nil, c.resolveError(response) + response, err := c.request("SNAP") + if err != nil { + return nil, err + } + + if ret, ok := response.Response.(*core.Snapshot); ok { + return ret, nil + } + + // Backwards compatibility + if ret, ok := 
response.Response.([]*core.Snapshot); ok { + snap := core.NewSnapshot("Log Courier") + for _, sub := range ret { + snap.AddSub(sub) + } + snap.Sort() + return snap, nil + } + + return nil, c.resolveError(response) } diff --git a/src/lc-lib/admin/listener.go b/src/lc-lib/admin/listener.go index 6a2fcd2b..94ed0887 100644 --- a/src/lc-lib/admin/listener.go +++ b/src/lc-lib/admin/listener.go @@ -12,154 +12,154 @@ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. -*/ + */ package admin import ( - "fmt" - "github.com/driskell/log-courier/src/lc-lib/core" - "net" - "strings" - "time" + "fmt" + "github.com/driskell/log-courier/src/lc-lib/core" + "net" + "strings" + "time" ) type Listener struct { - core.PipelineSegment - core.PipelineConfigReceiver - - config *core.GeneralConfig - command_chan chan string - response_chan chan *Response - listener NetListener - client_shutdown chan interface{} - client_started chan interface{} - client_ended chan interface{} + core.PipelineSegment + core.PipelineConfigReceiver + + config *core.GeneralConfig + command_chan chan string + response_chan chan *Response + listener NetListener + client_shutdown chan interface{} + client_started chan interface{} + client_ended chan interface{} } func NewListener(pipeline *core.Pipeline, config *core.GeneralConfig) (*Listener, error) { - var err error + var err error - ret := &Listener{ - config: config, - command_chan: make(chan string), - response_chan: make(chan *Response), - client_shutdown: make(chan interface{}), - // TODO: Make this limit configurable - client_started: make(chan interface{}, 50), - client_ended: make(chan interface{}, 50), - } + ret := &Listener{ + config: config, + command_chan: make(chan string), + response_chan: make(chan *Response), + client_shutdown: make(chan interface{}), + // TODO: Make this limit configurable + client_started: make(chan 
interface{}, 50), + client_ended: make(chan interface{}, 50), + } - if ret.listener, err = ret.listen(config); err != nil { - return nil, err - } + if ret.listener, err = ret.listen(config); err != nil { + return nil, err + } - pipeline.Register(ret) + pipeline.Register(ret) - return ret, nil + return ret, nil } func (l *Listener) listen(config *core.GeneralConfig) (NetListener, error) { - bind := strings.SplitN(config.AdminBind, ":", 2) - if len(bind) == 1 { - bind = append(bind, bind[0]) - bind[0] = "tcp" - } + bind := strings.SplitN(config.AdminBind, ":", 2) + if len(bind) == 1 { + bind = append(bind, bind[0]) + bind[0] = "tcp" + } - if listener, ok := registeredListeners[bind[0]]; ok { - return listener(bind[0], bind[1]) - } + if listener, ok := registeredListeners[bind[0]]; ok { + return listener(bind[0], bind[1]) + } - return nil, fmt.Errorf("Unknown transport specified for admin bind: '%s'", bind[0]) + return nil, fmt.Errorf("Unknown transport specified for admin bind: '%s'", bind[0]) } func (l *Listener) OnCommand() <-chan string { - return l.command_chan + return l.command_chan } func (l *Listener) Respond(response *Response) { - l.response_chan <- response + l.response_chan <- response } func (l *Listener) Run() { - defer func(){ - l.Done() - }() + defer func() { + l.Done() + }() ListenerLoop: - for { - select { - case <-l.OnShutdown(): - break ListenerLoop - case config := <-l.OnConfig(): - // We can't yet disable admin during a reload - if config.General.AdminEnabled { - if config.General.AdminBind != l.config.AdminBind { - new_listener, err := l.listen(&config.General) - if err != nil { - log.Error("The new admin configuration failed to apply: %s", err) - continue - } - - l.listener.Close() - l.listener = new_listener - l.config = &config.General - } - } - default: - } - - l.listener.SetDeadline(time.Now().Add(time.Second)) - - conn, err := l.listener.Accept() - if err != nil { - if net_err, ok := err.(*net.OpError); ok && net_err.Timeout() { - 
continue - } - log.Warning("Failed to accept admin connection: %s", err) - } - - log.Debug("New admin connection from %s", conn.RemoteAddr()) - - l.startServer(conn) - } - - // Shutdown listener - l.listener.Close() - - // Trigger shutdowns - close(l.client_shutdown) - - // Wait for shutdowns - for { - if len(l.client_started) == 0 { - break - } - - select { - case <-l.client_ended: - <-l.client_started - default: - } - } + for { + select { + case <-l.OnShutdown(): + break ListenerLoop + case config := <-l.OnConfig(): + // We can't yet disable admin during a reload + if config.General.AdminEnabled { + if config.General.AdminBind != l.config.AdminBind { + new_listener, err := l.listen(&config.General) + if err != nil { + log.Error("The new admin configuration failed to apply: %s", err) + continue + } + + l.listener.Close() + l.listener = new_listener + l.config = &config.General + } + } + default: + } + + l.listener.SetDeadline(time.Now().Add(time.Second)) + + conn, err := l.listener.Accept() + if err != nil { + if net_err, ok := err.(*net.OpError); ok && net_err.Timeout() { + continue + } + log.Warning("Failed to accept admin connection: %s", err) + } + + log.Debug("New admin connection from %s", conn.RemoteAddr()) + + l.startServer(conn) + } + + // Shutdown listener + l.listener.Close() + + // Trigger shutdowns + close(l.client_shutdown) + + // Wait for shutdowns + for { + if len(l.client_started) == 0 { + break + } + + select { + case <-l.client_ended: + <-l.client_started + default: + } + } } func (l *Listener) startServer(conn net.Conn) { - server := newServer(l, conn) - - select { - case <-l.client_ended: - <-l.client_started - default: - } - - select { - case l.client_started <- 1: - default: - // TODO: Make this limit configurable - log.Warning("Refused admin connection: Admin connection limit (50) reached") - return - } - - go server.Run() + server := newServer(l, conn) + + select { + case <-l.client_ended: + <-l.client_started + default: + } + + select { + 
case l.client_started <- 1: + default: + // TODO: Make this limit configurable + log.Warning("Refused admin connection: Admin connection limit (50) reached") + return + } + + go server.Run() } diff --git a/src/lc-lib/admin/logging.go b/src/lc-lib/admin/logging.go index efc0a01b..c0806433 100644 --- a/src/lc-lib/admin/logging.go +++ b/src/lc-lib/admin/logging.go @@ -21,5 +21,5 @@ import "github.com/op/go-logging" var log *logging.Logger func init() { - log = logging.MustGetLogger("admin") + log = logging.MustGetLogger("admin") } diff --git a/src/lc-lib/admin/responses.go b/src/lc-lib/admin/responses.go index 55d03a1f..3e1c78e0 100644 --- a/src/lc-lib/admin/responses.go +++ b/src/lc-lib/admin/responses.go @@ -12,18 +12,18 @@ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. -*/ + */ package admin import ( - "encoding/gob" - "github.com/driskell/log-courier/src/lc-lib/core" - "time" + "encoding/gob" + "github.com/driskell/log-courier/src/lc-lib/core" + "time" ) type Response struct { - Response interface{} + Response interface{} } type PongResponse struct { @@ -33,30 +33,30 @@ type ReloadResponse struct { } type ErrorResponse struct { - Message string + Message string } func (e *ErrorResponse) Error() string { - return e.Message + return e.Message } func init() { - // Response structure - gob.Register(&Response{}) + // Response structure + gob.Register(&Response{}) - // General error - gob.Register(&ErrorResponse{}) + // General error + gob.Register(&ErrorResponse{}) - // PONG - gob.Register(&PongResponse{}) + // PONG + gob.Register(&PongResponse{}) - // RELD - gob.Register(&ReloadResponse{}) + // RELD + gob.Register(&ReloadResponse{}) - // SNAP - gob.Register(&core.Snapshot{}) - // SNAP - time.Time - gob.Register(time.Now()) - // SNAP - time.Duration - gob.Register(time.Since(time.Now())) + // SNAP + gob.Register(&core.Snapshot{}) + // SNAP 
- time.Time + gob.Register(time.Now()) + // SNAP - time.Duration + gob.Register(time.Since(time.Now())) } diff --git a/src/lc-lib/admin/server.go b/src/lc-lib/admin/server.go index 7c7e0e14..dbb25b47 100644 --- a/src/lc-lib/admin/server.go +++ b/src/lc-lib/admin/server.go @@ -12,136 +12,136 @@ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. -*/ + */ package admin import ( - "encoding/gob" - "fmt" - "io" - "net" - "time" + "encoding/gob" + "fmt" + "io" + "net" + "time" ) type server struct { - listener *Listener - conn net.Conn + listener *Listener + conn net.Conn - encoder *gob.Encoder + encoder *gob.Encoder } func newServer(listener *Listener, conn net.Conn) *server { - return &server{ - listener: listener, - conn: conn, - } + return &server{ + listener: listener, + conn: conn, + } } func (s *server) Run() { - if err := s.loop(); err != nil { - log.Warning("Error on admin connection from %s: %s", s.conn.RemoteAddr(), err) - } else { - log.Debug("Admin connection from %s closed", s.conn.RemoteAddr()) - } + if err := s.loop(); err != nil { + log.Warning("Error on admin connection from %s: %s", s.conn.RemoteAddr(), err) + } else { + log.Debug("Admin connection from %s closed", s.conn.RemoteAddr()) + } - if conn, ok := s.conn.(*net.TCPConn); ok { - // TODO: Make linger time configurable? - conn.SetLinger(5) - } + if conn, ok := s.conn.(*net.TCPConn); ok { + // TODO: Make linger time configurable? 
+ conn.SetLinger(5) + } - s.conn.Close() + s.conn.Close() - s.listener.client_ended <- 1 + s.listener.client_ended <- 1 } func (s *server) loop() (err error) { - var result *Response -// TODO : Obey shutdown request on s.listener.client_shutdown channel close - s.encoder = gob.NewEncoder(s.conn) - - command := make([]byte, 4) - - for { - if err = s.readCommand(command); err != nil { - if err == io.EOF { - err = nil - } - return - } - - log.Debug("Command from %s: %s", s.conn.RemoteAddr(), command) - - if string(command) == "PING" { - result = &Response{&PongResponse{}} - } else { - result = s.processCommand(string(command)) - } - - if err = s.sendResponse(result); err != nil { - return - } - } + var result *Response + // TODO : Obey shutdown request on s.listener.client_shutdown channel close + s.encoder = gob.NewEncoder(s.conn) + + command := make([]byte, 4) + + for { + if err = s.readCommand(command); err != nil { + if err == io.EOF { + err = nil + } + return + } + + log.Debug("Command from %s: %s", s.conn.RemoteAddr(), command) + + if string(command) == "PING" { + result = &Response{&PongResponse{}} + } else { + result = s.processCommand(string(command)) + } + + if err = s.sendResponse(result); err != nil { + return + } + } } func (s *server) readCommand(command []byte) error { - total_read := 0 - start_time := time.Now() - - for { - // Poll every second for shutdown - if err := s.conn.SetReadDeadline(time.Now().Add(time.Second)); err != nil { - return err - } - - read, err := s.conn.Read(command[total_read:4]) - if err != nil { - if op_err, ok := err.(*net.OpError); ok && op_err.Timeout() { - // TODO: Make idle timeout configurable - if time.Now().Sub(start_time) <= 1800 * time.Second { - // Check shutdown at each interval - select { - case <-s.listener.client_shutdown: - return io.EOF - default: - } - - continue - } - } else if total_read != 0 && op_err == io.EOF { - return fmt.Errorf("EOF") - } - return err - } - - total_read += read - if total_read == 4 { - 
break - } - } - - return nil + total_read := 0 + start_time := time.Now() + + for { + // Poll every second for shutdown + if err := s.conn.SetReadDeadline(time.Now().Add(time.Second)); err != nil { + return err + } + + read, err := s.conn.Read(command[total_read:4]) + if err != nil { + if op_err, ok := err.(*net.OpError); ok && op_err.Timeout() { + // TODO: Make idle timeout configurable + if time.Now().Sub(start_time) <= 1800*time.Second { + // Check shutdown at each interval + select { + case <-s.listener.client_shutdown: + return io.EOF + default: + } + + continue + } + } else if total_read != 0 && op_err == io.EOF { + return fmt.Errorf("EOF") + } + return err + } + + total_read += read + if total_read == 4 { + break + } + } + + return nil } func (s *server) sendResponse(response *Response) error { - if err := s.conn.SetWriteDeadline(time.Now().Add(5 * time.Second)); err != nil { - return err - } + if err := s.conn.SetWriteDeadline(time.Now().Add(5 * time.Second)); err != nil { + return err + } - if err := s.encoder.Encode(response); err != nil { - return err - } + if err := s.encoder.Encode(response); err != nil { + return err + } - return nil + return nil } func (s *server) processCommand(command string) *Response { - select { - case s.listener.command_chan <- command: - // Listener immediately stops processing commands on shutdown, so catch it here - case <-s.listener.client_shutdown: - return &Response{&ErrorResponse{Message: "Log Courier is shutting down"}} - } - - return <-s.listener.response_chan + select { + case s.listener.command_chan <- command: + // Listener immediately stops processing commands on shutdown, so catch it here + case <-s.listener.client_shutdown: + return &Response{&ErrorResponse{Message: "Log Courier is shutting down"}} + } + + return <-s.listener.response_chan } diff --git a/src/lc-lib/admin/transport.go b/src/lc-lib/admin/transport.go index 483ffad7..3632aa82 100644 --- a/src/lc-lib/admin/transport.go +++ 
b/src/lc-lib/admin/transport.go @@ -12,7 +12,7 @@ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. -*/ + */ package admin @@ -33,7 +33,7 @@ type listenerFunc func(string, string) (NetListener, error) var ( registeredConnectors map[string]connectorFunc = make(map[string]connectorFunc) - registeredListeners map[string]listenerFunc = make(map[string]listenerFunc) + registeredListeners map[string]listenerFunc = make(map[string]listenerFunc) ) func registerTransport(name string, connector connectorFunc, listener listenerFunc) { diff --git a/src/lc-lib/admin/transport_tcp.go b/src/lc-lib/admin/transport_tcp.go index 44ab237c..fb96e75b 100644 --- a/src/lc-lib/admin/transport_tcp.go +++ b/src/lc-lib/admin/transport_tcp.go @@ -12,7 +12,7 @@ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. -*/ + */ package admin diff --git a/src/lc-lib/admin/transport_unix.go b/src/lc-lib/admin/transport_unix.go index 02c6d6e9..f577e4aa 100644 --- a/src/lc-lib/admin/transport_unix.go +++ b/src/lc-lib/admin/transport_unix.go @@ -14,7 +14,7 @@ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
-*/ + */ package admin diff --git a/src/lc-lib/codecs/filter.go b/src/lc-lib/codecs/filter.go index 157e57c0..c7b6f624 100644 --- a/src/lc-lib/codecs/filter.go +++ b/src/lc-lib/codecs/filter.go @@ -17,90 +17,90 @@ package codecs import ( - "errors" - "fmt" - "github.com/driskell/log-courier/src/lc-lib/core" - "regexp" + "errors" + "fmt" + "github.com/driskell/log-courier/src/lc-lib/core" + "regexp" ) type CodecFilterFactory struct { - Patterns []string `config:"patterns"` - Negate bool `config:"negate"` + Patterns []string `config:"patterns"` + Negate bool `config:"negate"` - matchers []*regexp.Regexp + matchers []*regexp.Regexp } type CodecFilter struct { - config *CodecFilterFactory - last_offset int64 - filtered_lines uint64 - callback_func core.CodecCallbackFunc - meter_filtered uint64 + config *CodecFilterFactory + last_offset int64 + filtered_lines uint64 + callback_func core.CodecCallbackFunc + meter_filtered uint64 } func NewFilterCodecFactory(config *core.Config, config_path string, unused map[string]interface{}, name string) (core.CodecFactory, error) { - var err error - - result := &CodecFilterFactory{} - if err = config.PopulateConfig(result, config_path, unused); err != nil { - return nil, err - } - - if len(result.Patterns) == 0 { - return nil, errors.New("Filter codec pattern must be specified.") - } - - result.matchers = make([]*regexp.Regexp, len(result.Patterns)) - for k, pattern := range result.Patterns { - result.matchers[k], err = regexp.Compile(pattern) - if err != nil { - return nil, fmt.Errorf("Failed to compile filter codec pattern, '%s'.", err) - } - } - - return result, nil + var err error + + result := &CodecFilterFactory{} + if err = config.PopulateConfig(result, config_path, unused); err != nil { + return nil, err + } + + if len(result.Patterns) == 0 { + return nil, errors.New("Filter codec pattern must be specified.") + } + + result.matchers = make([]*regexp.Regexp, len(result.Patterns)) + for k, pattern := range result.Patterns { + 
result.matchers[k], err = regexp.Compile(pattern) + if err != nil { + return nil, fmt.Errorf("Failed to compile filter codec pattern, '%s'.", err) + } + } + + return result, nil } func (f *CodecFilterFactory) NewCodec(callback_func core.CodecCallbackFunc, offset int64) core.Codec { - return &CodecFilter{ - config: f, - last_offset: offset, - callback_func: callback_func, - } + return &CodecFilter{ + config: f, + last_offset: offset, + callback_func: callback_func, + } } func (c *CodecFilter) Teardown() int64 { - return c.last_offset + return c.last_offset } func (c *CodecFilter) Event(start_offset int64, end_offset int64, text string) { - // Only flush the event if it matches a filter - var match bool - for _, matcher := range c.config.matchers { - if matcher.MatchString(text) { - match = true - break - } - } - - if c.config.Negate != match { - c.callback_func(start_offset, end_offset, text) - } else { - c.filtered_lines++ - } + // Only flush the event if it matches a filter + var match bool + for _, matcher := range c.config.matchers { + if matcher.MatchString(text) { + match = true + break + } + } + + if c.config.Negate != match { + c.callback_func(start_offset, end_offset, text) + } else { + c.filtered_lines++ + } } func (c *CodecFilter) Meter() { - c.meter_filtered = c.filtered_lines + c.meter_filtered = c.filtered_lines } func (c *CodecFilter) Snapshot() *core.Snapshot { - snap := core.NewSnapshot("Filter Codec") - snap.AddEntry("Filtered lines", c.meter_filtered) - return snap + snap := core.NewSnapshot("Filter Codec") + snap.AddEntry("Filtered lines", c.meter_filtered) + return snap } // Register the codec func init() { - core.RegisterCodec("filter", NewFilterCodecFactory) + core.RegisterCodec("filter", NewFilterCodecFactory) } diff --git a/src/lc-lib/codecs/filter_test.go b/src/lc-lib/codecs/filter_test.go index 20964e0e..2b450227 100644 --- a/src/lc-lib/codecs/filter_test.go +++ b/src/lc-lib/codecs/filter_test.go @@ -1,102 +1,102 @@ package codecs import ( 
- "github.com/driskell/log-courier/src/lc-lib/core" - "testing" + "github.com/driskell/log-courier/src/lc-lib/core" + "testing" ) var filter_lines []string func createFilterCodec(unused map[string]interface{}, callback core.CodecCallbackFunc, t *testing.T) core.Codec { - config := core.NewConfig() + config := core.NewConfig() - factory, err := NewFilterCodecFactory(config, "", unused, "filter") - if err != nil { - t.Logf("Failed to create filter codec: %s", err) - t.FailNow() - } + factory, err := NewFilterCodecFactory(config, "", unused, "filter") + if err != nil { + t.Logf("Failed to create filter codec: %s", err) + t.FailNow() + } - return factory.NewCodec(callback, 0) + return factory.NewCodec(callback, 0) } func checkFilter(start_offset int64, end_offset int64, text string) { - filter_lines = append(filter_lines, text) + filter_lines = append(filter_lines, text) } func TestFilter(t *testing.T) { - filter_lines = make([]string, 0, 1) - - codec := createFilterCodec(map[string]interface{}{ - "patterns": []string{"^NEXT line$"}, - "negate": false, - }, checkFilter, t) - - // Send some data - codec.Event(0, 1, "DEBUG First line") - codec.Event(2, 3, "NEXT line") - codec.Event(4, 5, "ANOTHER line") - codec.Event(6, 7, "DEBUG Next line") - - if len(filter_lines) != 1 { - t.Logf("Wrong line count received") - t.FailNow() - } else if filter_lines[0] != "NEXT line" { - t.Logf("Wrong line[0] received: %s", filter_lines[0]) - t.FailNow() - } + filter_lines = make([]string, 0, 1) + + codec := createFilterCodec(map[string]interface{}{ + "patterns": []string{"^NEXT line$"}, + "negate": false, + }, checkFilter, t) + + // Send some data + codec.Event(0, 1, "DEBUG First line") + codec.Event(2, 3, "NEXT line") + codec.Event(4, 5, "ANOTHER line") + codec.Event(6, 7, "DEBUG Next line") + + if len(filter_lines) != 1 { + t.Logf("Wrong line count received") + t.FailNow() + } else if filter_lines[0] != "NEXT line" { + t.Logf("Wrong line[0] received: %s", filter_lines[0]) + t.FailNow() 
+ } } func TestFilterNegate(t *testing.T) { - filter_lines = make([]string, 0, 1) - - codec := createFilterCodec(map[string]interface{}{ - "patterns": []string{"^NEXT line$"}, - "negate": true, - }, checkFilter, t) - - // Send some data - codec.Event(0, 1, "DEBUG First line") - codec.Event(2, 3, "NEXT line") - codec.Event(4, 5, "ANOTHER line") - codec.Event(6, 7, "DEBUG Next line") - - if len(filter_lines) != 3 { - t.Logf("Wrong line count received") - t.FailNow() - } else if filter_lines[0] != "DEBUG First line" { - t.Logf("Wrong line[0] received: %s", filter_lines[0]) - t.FailNow() - } else if filter_lines[1] != "ANOTHER line" { - t.Logf("Wrong line[1] received: %s", filter_lines[1]) - t.FailNow() - } else if filter_lines[2] != "DEBUG Next line" { - t.Logf("Wrong line[2] received: %s", filter_lines[2]) - t.FailNow() - } + filter_lines = make([]string, 0, 1) + + codec := createFilterCodec(map[string]interface{}{ + "patterns": []string{"^NEXT line$"}, + "negate": true, + }, checkFilter, t) + + // Send some data + codec.Event(0, 1, "DEBUG First line") + codec.Event(2, 3, "NEXT line") + codec.Event(4, 5, "ANOTHER line") + codec.Event(6, 7, "DEBUG Next line") + + if len(filter_lines) != 3 { + t.Logf("Wrong line count received") + t.FailNow() + } else if filter_lines[0] != "DEBUG First line" { + t.Logf("Wrong line[0] received: %s", filter_lines[0]) + t.FailNow() + } else if filter_lines[1] != "ANOTHER line" { + t.Logf("Wrong line[1] received: %s", filter_lines[1]) + t.FailNow() + } else if filter_lines[2] != "DEBUG Next line" { + t.Logf("Wrong line[2] received: %s", filter_lines[2]) + t.FailNow() + } } func TestFilterMultiple(t *testing.T) { - filter_lines = make([]string, 0, 1) - - codec := createFilterCodec(map[string]interface{}{ - "patterns": []string{"^NEXT line$", "^DEBUG First line$"}, - "negate": false, - }, checkFilter, t) - - // Send some data - codec.Event(0, 1, "DEBUG First line") - codec.Event(2, 3, "NEXT line") - codec.Event(4, 5, "ANOTHER line") - 
codec.Event(6, 7, "DEBUG Next line") - - if len(filter_lines) != 2 { - t.Logf("Wrong line count received") - t.FailNow() - } else if filter_lines[0] != "DEBUG First line" { - t.Logf("Wrong line[0] received: %s", filter_lines[0]) - t.FailNow() - } else if filter_lines[1] != "NEXT line" { - t.Logf("Wrong line[1] received: %s", filter_lines[1]) - t.FailNow() - } + filter_lines = make([]string, 0, 1) + + codec := createFilterCodec(map[string]interface{}{ + "patterns": []string{"^NEXT line$", "^DEBUG First line$"}, + "negate": false, + }, checkFilter, t) + + // Send some data + codec.Event(0, 1, "DEBUG First line") + codec.Event(2, 3, "NEXT line") + codec.Event(4, 5, "ANOTHER line") + codec.Event(6, 7, "DEBUG Next line") + + if len(filter_lines) != 2 { + t.Logf("Wrong line count received") + t.FailNow() + } else if filter_lines[0] != "DEBUG First line" { + t.Logf("Wrong line[0] received: %s", filter_lines[0]) + t.FailNow() + } else if filter_lines[1] != "NEXT line" { + t.Logf("Wrong line[1] received: %s", filter_lines[1]) + t.FailNow() + } } diff --git a/src/lc-lib/codecs/multiline.go b/src/lc-lib/codecs/multiline.go index e0f06e09..1a6ec8e4 100644 --- a/src/lc-lib/codecs/multiline.go +++ b/src/lc-lib/codecs/multiline.go @@ -17,236 +17,236 @@ package codecs import ( - "errors" - "fmt" - "github.com/driskell/log-courier/src/lc-lib/core" - "regexp" - "strings" - "sync" - "time" + "errors" + "fmt" + "github.com/driskell/log-courier/src/lc-lib/core" + "regexp" + "strings" + "sync" + "time" ) const ( - codecMultiline_What_Previous = 0x00000001 - codecMultiline_What_Next = 0x00000002 + codecMultiline_What_Previous = 0x00000001 + codecMultiline_What_Next = 0x00000002 ) type CodecMultilineFactory struct { - Pattern string `config:"pattern"` - What string `config:"what"` - Negate bool `config:"negate"` - PreviousTimeout time.Duration `config:"previous timeout"` - MaxMultilineBytes int64 `config:"max multiline bytes"` - - matcher *regexp.Regexp - what int + Pattern string 
`config:"pattern"` + What string `config:"what"` + Negate bool `config:"negate"` + PreviousTimeout time.Duration `config:"previous timeout"` + MaxMultilineBytes int64 `config:"max multiline bytes"` + + matcher *regexp.Regexp + what int } type CodecMultiline struct { - config *CodecMultilineFactory - last_offset int64 - callback_func core.CodecCallbackFunc - - end_offset int64 - start_offset int64 - buffer []string - buffer_lines int64 - buffer_len int64 - timer_lock sync.Mutex - timer_stop chan interface{} - timer_wait sync.WaitGroup - timer_deadline time.Time - - meter_lines int64 - meter_bytes int64 + config *CodecMultilineFactory + last_offset int64 + callback_func core.CodecCallbackFunc + + end_offset int64 + start_offset int64 + buffer []string + buffer_lines int64 + buffer_len int64 + timer_lock sync.Mutex + timer_stop chan interface{} + timer_wait sync.WaitGroup + timer_deadline time.Time + + meter_lines int64 + meter_bytes int64 } func NewMultilineCodecFactory(config *core.Config, config_path string, unused map[string]interface{}, name string) (core.CodecFactory, error) { - var err error - - result := &CodecMultilineFactory{} - if err = config.PopulateConfig(result, config_path, unused); err != nil { - return nil, err - } - - if result.Pattern == "" { - return nil, errors.New("Multiline codec pattern must be specified.") - } - - result.matcher, err = regexp.Compile(result.Pattern) - if err != nil { - return nil, fmt.Errorf("Failed to compile multiline codec pattern, '%s'.", err) - } - - if result.What == "" || result.What == "previous" { - result.what = codecMultiline_What_Previous - } else if result.What == "next" { - result.what = codecMultiline_What_Next - } - - if result.MaxMultilineBytes == 0 { - result.MaxMultilineBytes = config.General.SpoolMaxBytes - } - - // We conciously allow a line 4 bytes longer what we would normally have as the limit - // This 4 bytes is the event header size. 
It's not worth considering though - if result.MaxMultilineBytes > config.General.SpoolMaxBytes { - return nil, fmt.Errorf("max multiline bytes cannot be greater than /general/spool max bytes") - } - - return result, nil + var err error + + result := &CodecMultilineFactory{} + if err = config.PopulateConfig(result, config_path, unused); err != nil { + return nil, err + } + + if result.Pattern == "" { + return nil, errors.New("Multiline codec pattern must be specified.") + } + + result.matcher, err = regexp.Compile(result.Pattern) + if err != nil { + return nil, fmt.Errorf("Failed to compile multiline codec pattern, '%s'.", err) + } + + if result.What == "" || result.What == "previous" { + result.what = codecMultiline_What_Previous + } else if result.What == "next" { + result.what = codecMultiline_What_Next + } + + if result.MaxMultilineBytes == 0 { + result.MaxMultilineBytes = config.General.SpoolMaxBytes + } + + // We conciously allow a line 4 bytes longer what we would normally have as the limit + // This 4 bytes is the event header size. 
It's not worth considering though + if result.MaxMultilineBytes > config.General.SpoolMaxBytes { + return nil, fmt.Errorf("max multiline bytes cannot be greater than /general/spool max bytes") + } + + return result, nil } func (f *CodecMultilineFactory) NewCodec(callback_func core.CodecCallbackFunc, offset int64) core.Codec { - c := &CodecMultiline{ - config: f, - end_offset: offset, - last_offset: offset, - callback_func: callback_func, - } - - // Start the "previous timeout" routine that will auto flush at deadline - if f.PreviousTimeout != 0 { - c.timer_stop = make(chan interface{}) - c.timer_wait.Add(1) - - c.timer_deadline = time.Now().Add(f.PreviousTimeout) - - go c.deadlineRoutine() - } - return c + c := &CodecMultiline{ + config: f, + end_offset: offset, + last_offset: offset, + callback_func: callback_func, + } + + // Start the "previous timeout" routine that will auto flush at deadline + if f.PreviousTimeout != 0 { + c.timer_stop = make(chan interface{}) + c.timer_wait.Add(1) + + c.timer_deadline = time.Now().Add(f.PreviousTimeout) + + go c.deadlineRoutine() + } + return c } func (c *CodecMultiline) Teardown() int64 { - if c.config.PreviousTimeout != 0 { - close(c.timer_stop) - c.timer_wait.Wait() - } + if c.config.PreviousTimeout != 0 { + close(c.timer_stop) + c.timer_wait.Wait() + } - return c.last_offset + return c.last_offset } func (c *CodecMultiline) Event(start_offset int64, end_offset int64, text string) { - // TODO(driskell): If we are using previous and we match on the very first line read, - // then this is because we've started in the middle of a multiline event (the first line - // should never match) - so we could potentially offer an option to discard this. - // The benefit would be that when using previous_timeout, we could discard any extraneous - // event data that did not get written in time, if the user so wants it, in order to prevent - // odd incomplete data. 
It would be a signal from the user, "I will worry about the buffering - // issues my programs may have - you just make sure to write each event either completely or - // partially, always with the FIRST line correct (which could be the important one)." - match_failed := c.config.Negate == c.config.matcher.MatchString(text) - if c.config.what == codecMultiline_What_Previous { - if c.config.PreviousTimeout != 0 { - // Prevent a flush happening while we're modifying the stored data - c.timer_lock.Lock() - } - if match_failed { - c.flush() - } - } - - var text_len int64 = int64(len(text)) - - // Check we don't exceed the max multiline bytes - if check_len := c.buffer_len + text_len + c.buffer_lines; check_len > c.config.MaxMultilineBytes { - // Store partial and flush - overflow := check_len - c.config.MaxMultilineBytes - cut := text_len - overflow - c.end_offset = end_offset - overflow - - c.buffer = append(c.buffer, text[:cut]) - c.buffer_lines++ - c.buffer_len += cut - - c.flush() - - // Append the remaining data to the buffer - start_offset += cut - text = text[cut:] - } - - if len(c.buffer) == 0 { - c.start_offset = start_offset - } - c.end_offset = end_offset - - c.buffer = append(c.buffer, text) - c.buffer_lines++ - c.buffer_len += text_len - - if c.config.what == codecMultiline_What_Previous { - if c.config.PreviousTimeout != 0 { - // Reset the timer and unlock - c.timer_deadline = time.Now().Add(c.config.PreviousTimeout) - c.timer_lock.Unlock() - } - } else if c.config.what == codecMultiline_What_Next && match_failed { - c.flush() - } - // TODO: Split the line if its too big + // TODO(driskell): If we are using previous and we match on the very first line read, + // then this is because we've started in the middle of a multiline event (the first line + // should never match) - so we could potentially offer an option to discard this. 
+ // The benefit would be that when using previous_timeout, we could discard any extraneous + // event data that did not get written in time, if the user so wants it, in order to prevent + // odd incomplete data. It would be a signal from the user, "I will worry about the buffering + // issues my programs may have - you just make sure to write each event either completely or + // partially, always with the FIRST line correct (which could be the important one)." + match_failed := c.config.Negate == c.config.matcher.MatchString(text) + if c.config.what == codecMultiline_What_Previous { + if c.config.PreviousTimeout != 0 { + // Prevent a flush happening while we're modifying the stored data + c.timer_lock.Lock() + } + if match_failed { + c.flush() + } + } + + var text_len int64 = int64(len(text)) + + // Check we don't exceed the max multiline bytes + if check_len := c.buffer_len + text_len + c.buffer_lines; check_len > c.config.MaxMultilineBytes { + // Store partial and flush + overflow := check_len - c.config.MaxMultilineBytes + cut := text_len - overflow + c.end_offset = end_offset - overflow + + c.buffer = append(c.buffer, text[:cut]) + c.buffer_lines++ + c.buffer_len += cut + + c.flush() + + // Append the remaining data to the buffer + start_offset += cut + text = text[cut:] + } + + if len(c.buffer) == 0 { + c.start_offset = start_offset + } + c.end_offset = end_offset + + c.buffer = append(c.buffer, text) + c.buffer_lines++ + c.buffer_len += text_len + + if c.config.what == codecMultiline_What_Previous { + if c.config.PreviousTimeout != 0 { + // Reset the timer and unlock + c.timer_deadline = time.Now().Add(c.config.PreviousTimeout) + c.timer_lock.Unlock() + } + } else if c.config.what == codecMultiline_What_Next && match_failed { + c.flush() + } + // TODO: Split the line if its too big } func (c *CodecMultiline) flush() { - if len(c.buffer) == 0 { - return - } + if len(c.buffer) == 0 { + return + } - text := strings.Join(c.buffer, "\n") + text := 
strings.Join(c.buffer, "\n") - // Set last offset - this is returned in Teardown so if we're mid multiline and crash, we start this multiline again - c.last_offset = c.end_offset - c.buffer = nil - c.buffer_len = 0 - c.buffer_lines = 0 + // Set last offset - this is returned in Teardown so if we're mid multiline and crash, we start this multiline again + c.last_offset = c.end_offset + c.buffer = nil + c.buffer_len = 0 + c.buffer_lines = 0 - c.callback_func(c.start_offset, c.end_offset, text) + c.callback_func(c.start_offset, c.end_offset, text) } func (c *CodecMultiline) Meter() { - c.meter_lines = c.buffer_lines - c.meter_bytes = c.end_offset - c.last_offset + c.meter_lines = c.buffer_lines + c.meter_bytes = c.end_offset - c.last_offset } func (c *CodecMultiline) Snapshot() *core.Snapshot { - snap := core.NewSnapshot("Multiline Codec") - snap.AddEntry("Pending lines", c.meter_lines) - snap.AddEntry("Pending bytes", c.meter_bytes) - return snap + snap := core.NewSnapshot("Multiline Codec") + snap.AddEntry("Pending lines", c.meter_lines) + snap.AddEntry("Pending bytes", c.meter_bytes) + return snap } func (c *CodecMultiline) deadlineRoutine() { - timer := time.NewTimer(0) + timer := time.NewTimer(0) DeadlineLoop: - for { - select { - case <-c.timer_stop: - timer.Stop() - - // Shutdown signal so end the routine - break DeadlineLoop - case now := <-timer.C: - c.timer_lock.Lock() - - // Have we reached the target time? - if !now.After(c.timer_deadline) { - // Deadline moved, update the timer - timer.Reset(c.timer_deadline.Sub(now)) - c.timer_lock.Unlock() - continue - } - - c.flush() - timer.Reset(c.config.PreviousTimeout) - c.timer_lock.Unlock() - } - } - - c.timer_wait.Done() + for { + select { + case <-c.timer_stop: + timer.Stop() + + // Shutdown signal so end the routine + break DeadlineLoop + case now := <-timer.C: + c.timer_lock.Lock() + + // Have we reached the target time? 
+ if !now.After(c.timer_deadline) { + // Deadline moved, update the timer + timer.Reset(c.timer_deadline.Sub(now)) + c.timer_lock.Unlock() + continue + } + + c.flush() + timer.Reset(c.config.PreviousTimeout) + c.timer_lock.Unlock() + } + } + + c.timer_wait.Done() } // Register the codec func init() { - core.RegisterCodec("multiline", NewMultilineCodecFactory) + core.RegisterCodec("multiline", NewMultilineCodecFactory) } diff --git a/src/lc-lib/codecs/multiline_test.go b/src/lc-lib/codecs/multiline_test.go index 820259cf..87d51188 100644 --- a/src/lc-lib/codecs/multiline_test.go +++ b/src/lc-lib/codecs/multiline_test.go @@ -1,10 +1,10 @@ package codecs import ( - "github.com/driskell/log-courier/src/lc-lib/core" - "sync" - "testing" - "time" + "github.com/driskell/log-courier/src/lc-lib/core" + "sync" + "testing" + "time" ) var multiline_t *testing.T @@ -12,223 +12,223 @@ var multiline_lines int var multiline_lock sync.Mutex func createMultilineCodec(unused map[string]interface{}, callback core.CodecCallbackFunc, t *testing.T) core.Codec { - config := core.NewConfig() - config.General.MaxLineBytes = 1048576 - config.General.SpoolMaxBytes = 10485760 + config := core.NewConfig() + config.General.MaxLineBytes = 1048576 + config.General.SpoolMaxBytes = 10485760 - factory, err := NewMultilineCodecFactory(config, "", unused, "multiline") - if err != nil { - t.Logf("Failed to create multiline codec: %s", err) - t.FailNow() - } + factory, err := NewMultilineCodecFactory(config, "", unused, "multiline") + if err != nil { + t.Logf("Failed to create multiline codec: %s", err) + t.FailNow() + } - return factory.NewCodec(callback, 0) + return factory.NewCodec(callback, 0) } func checkMultiline(start_offset int64, end_offset int64, text string) { - multiline_lock.Lock() - defer multiline_lock.Unlock() - multiline_lines++ - - if multiline_lines == 1 { - if text != "DEBUG First line\nNEXT line\nANOTHER line" { - multiline_t.Logf("Event data incorrect [% X]", text) - 
multiline_t.FailNow() - } - - if start_offset != 0 { - multiline_t.Logf("Event start offset is incorrect [%d]", start_offset) - multiline_t.FailNow() - } - - if end_offset != 5 { - multiline_t.Logf("Event end offset is incorrect [%d]", end_offset) - multiline_t.FailNow() - } - - return - } - - if text != "DEBUG Next line" { - multiline_t.Logf("Event data incorrect [% X]", text) - multiline_t.FailNow() - } - - if start_offset != 6 { - multiline_t.Logf("Event start offset is incorrect [%d]", start_offset) - multiline_t.FailNow() - } - - if end_offset != 7 { - multiline_t.Logf("Event end offset is incorrect [%d]", end_offset) - multiline_t.FailNow() - } + multiline_lock.Lock() + defer multiline_lock.Unlock() + multiline_lines++ + + if multiline_lines == 1 { + if text != "DEBUG First line\nNEXT line\nANOTHER line" { + multiline_t.Logf("Event data incorrect [% X]", text) + multiline_t.FailNow() + } + + if start_offset != 0 { + multiline_t.Logf("Event start offset is incorrect [%d]", start_offset) + multiline_t.FailNow() + } + + if end_offset != 5 { + multiline_t.Logf("Event end offset is incorrect [%d]", end_offset) + multiline_t.FailNow() + } + + return + } + + if text != "DEBUG Next line" { + multiline_t.Logf("Event data incorrect [% X]", text) + multiline_t.FailNow() + } + + if start_offset != 6 { + multiline_t.Logf("Event start offset is incorrect [%d]", start_offset) + multiline_t.FailNow() + } + + if end_offset != 7 { + multiline_t.Logf("Event end offset is incorrect [%d]", end_offset) + multiline_t.FailNow() + } } func TestMultilinePrevious(t *testing.T) { - multiline_t = t - multiline_lines = 0 - - codec := createMultilineCodec(map[string]interface{}{ - "pattern": "^(ANOTHER|NEXT) ", - "what": "previous", - "negate": false, - }, checkMultiline, t) - - // Send some data - codec.Event(0, 1, "DEBUG First line") - codec.Event(2, 3, "NEXT line") - codec.Event(4, 5, "ANOTHER line") - codec.Event(6, 7, "DEBUG Next line") - - if multiline_lines != 1 { - t.Logf("Wrong 
line count received") - t.FailNow() - } + multiline_t = t + multiline_lines = 0 + + codec := createMultilineCodec(map[string]interface{}{ + "pattern": "^(ANOTHER|NEXT) ", + "what": "previous", + "negate": false, + }, checkMultiline, t) + + // Send some data + codec.Event(0, 1, "DEBUG First line") + codec.Event(2, 3, "NEXT line") + codec.Event(4, 5, "ANOTHER line") + codec.Event(6, 7, "DEBUG Next line") + + if multiline_lines != 1 { + t.Logf("Wrong line count received") + t.FailNow() + } } func TestMultilinePreviousNegate(t *testing.T) { - multiline_t = t - multiline_lines = 0 - - codec := createMultilineCodec(map[string]interface{}{ - "pattern": "^DEBUG ", - "what": "previous", - "negate": true, - }, checkMultiline, t) - - // Send some data - codec.Event(0, 1, "DEBUG First line") - codec.Event(2, 3, "NEXT line") - codec.Event(4, 5, "ANOTHER line") - codec.Event(6, 7, "DEBUG Next line") - - if multiline_lines != 1 { - t.Logf("Wrong line count received") - t.FailNow() - } + multiline_t = t + multiline_lines = 0 + + codec := createMultilineCodec(map[string]interface{}{ + "pattern": "^DEBUG ", + "what": "previous", + "negate": true, + }, checkMultiline, t) + + // Send some data + codec.Event(0, 1, "DEBUG First line") + codec.Event(2, 3, "NEXT line") + codec.Event(4, 5, "ANOTHER line") + codec.Event(6, 7, "DEBUG Next line") + + if multiline_lines != 1 { + t.Logf("Wrong line count received") + t.FailNow() + } } func TestMultilinePreviousTimeout(t *testing.T) { - multiline_t = t - multiline_lines = 0 - - codec := createMultilineCodec(map[string]interface{}{ - "pattern": "^(ANOTHER|NEXT) ", - "what": "previous", - "negate": false, - "previous timeout": "5s", - }, checkMultiline, t) - - // Send some data - codec.Event(0, 1, "DEBUG First line") - codec.Event(2, 3, "NEXT line") - codec.Event(4, 5, "ANOTHER line") - codec.Event(6, 7, "DEBUG Next line") - - // Allow 3 seconds - time.Sleep(3 * time.Second) - - multiline_lock.Lock() - if multiline_lines != 1 { - t.Logf("Timeout 
triggered too early") - t.FailNow() - } - multiline_lock.Unlock() - - // Allow 7 seconds - time.Sleep(7 * time.Second) - - multiline_lock.Lock() - if multiline_lines != 2 { - t.Logf("Wrong line count received") - t.FailNow() - } - multiline_lock.Unlock() - - codec.Teardown() + multiline_t = t + multiline_lines = 0 + + codec := createMultilineCodec(map[string]interface{}{ + "pattern": "^(ANOTHER|NEXT) ", + "what": "previous", + "negate": false, + "previous timeout": "5s", + }, checkMultiline, t) + + // Send some data + codec.Event(0, 1, "DEBUG First line") + codec.Event(2, 3, "NEXT line") + codec.Event(4, 5, "ANOTHER line") + codec.Event(6, 7, "DEBUG Next line") + + // Allow 3 seconds + time.Sleep(3 * time.Second) + + multiline_lock.Lock() + if multiline_lines != 1 { + t.Logf("Timeout triggered too early") + t.FailNow() + } + multiline_lock.Unlock() + + // Allow 7 seconds + time.Sleep(7 * time.Second) + + multiline_lock.Lock() + if multiline_lines != 2 { + t.Logf("Wrong line count received") + t.FailNow() + } + multiline_lock.Unlock() + + codec.Teardown() } func TestMultilineNext(t *testing.T) { - multiline_t = t - multiline_lines = 0 - - codec := createMultilineCodec(map[string]interface{}{ - "pattern": "^(DEBUG|NEXT) ", - "what": "next", - "negate": false, - }, checkMultiline, t) - - // Send some data - codec.Event(0, 1, "DEBUG First line") - codec.Event(2, 3, "NEXT line") - codec.Event(4, 5, "ANOTHER line") - codec.Event(6, 7, "DEBUG Next line") - - if multiline_lines != 1 { - t.Logf("Wrong line count received") - t.FailNow() - } + multiline_t = t + multiline_lines = 0 + + codec := createMultilineCodec(map[string]interface{}{ + "pattern": "^(DEBUG|NEXT) ", + "what": "next", + "negate": false, + }, checkMultiline, t) + + // Send some data + codec.Event(0, 1, "DEBUG First line") + codec.Event(2, 3, "NEXT line") + codec.Event(4, 5, "ANOTHER line") + codec.Event(6, 7, "DEBUG Next line") + + if multiline_lines != 1 { + t.Logf("Wrong line count received") + t.FailNow() 
+ } } func TestMultilineNextNegate(t *testing.T) { - multiline_t = t - multiline_lines = 0 - - codec := createMultilineCodec(map[string]interface{}{ - "pattern": "^ANOTHER ", - "what": "next", - "negate": true, - }, checkMultiline, t) - - // Send some data - codec.Event(0, 1, "DEBUG First line") - codec.Event(2, 3, "NEXT line") - codec.Event(4, 5, "ANOTHER line") - codec.Event(6, 7, "DEBUG Next line") - - if multiline_lines != 1 { - t.Logf("Wrong line count received") - t.FailNow() - } + multiline_t = t + multiline_lines = 0 + + codec := createMultilineCodec(map[string]interface{}{ + "pattern": "^ANOTHER ", + "what": "next", + "negate": true, + }, checkMultiline, t) + + // Send some data + codec.Event(0, 1, "DEBUG First line") + codec.Event(2, 3, "NEXT line") + codec.Event(4, 5, "ANOTHER line") + codec.Event(6, 7, "DEBUG Next line") + + if multiline_lines != 1 { + t.Logf("Wrong line count received") + t.FailNow() + } } func checkMultilineMaxBytes(start_offset int64, end_offset int64, text string) { - multiline_lines++ + multiline_lines++ - if multiline_lines == 1 { - if text != "DEBUG First line\nsecond line\nthi" { - multiline_t.Logf("Event data incorrect [% X]", text) - multiline_t.FailNow() - } + if multiline_lines == 1 { + if text != "DEBUG First line\nsecond line\nthi" { + multiline_t.Logf("Event data incorrect [% X]", text) + multiline_t.FailNow() + } - return - } + return + } - if text != "rd line" { - multiline_t.Logf("Second event data incorrect [% X]", text) - multiline_t.FailNow() - } + if text != "rd line" { + multiline_t.Logf("Second event data incorrect [% X]", text) + multiline_t.FailNow() + } } func TestMultilineMaxBytes(t *testing.T) { - multiline_t = t - multiline_lines = 0 - - codec := createMultilineCodec(map[string]interface{}{ - "max multiline bytes": int64(32), - "pattern": "^DEBUG ", - "negate": true, - }, checkMultilineMaxBytes, t) - - // Send some data - codec.Event(0, 1, "DEBUG First line") - codec.Event(2, 3, "second line") - 
codec.Event(4, 5, "third line") - codec.Event(6, 7, "DEBUG Next line") - - if multiline_lines != 2 { - t.Logf("Wrong line count received") - t.FailNow() - } + multiline_t = t + multiline_lines = 0 + + codec := createMultilineCodec(map[string]interface{}{ + "max multiline bytes": int64(32), + "pattern": "^DEBUG ", + "negate": true, + }, checkMultilineMaxBytes, t) + + // Send some data + codec.Event(0, 1, "DEBUG First line") + codec.Event(2, 3, "second line") + codec.Event(4, 5, "third line") + codec.Event(6, 7, "DEBUG Next line") + + if multiline_lines != 2 { + t.Logf("Wrong line count received") + t.FailNow() + } } diff --git a/src/lc-lib/codecs/plain.go b/src/lc-lib/codecs/plain.go index 0a5de2a1..00ac97e2 100644 --- a/src/lc-lib/codecs/plain.go +++ b/src/lc-lib/codecs/plain.go @@ -17,49 +17,49 @@ package codecs import ( - "github.com/driskell/log-courier/src/lc-lib/core" + "github.com/driskell/log-courier/src/lc-lib/core" ) type CodecPlainFactory struct { } type CodecPlain struct { - last_offset int64 - callback_func core.CodecCallbackFunc + last_offset int64 + callback_func core.CodecCallbackFunc } func NewPlainCodecFactory(config *core.Config, config_path string, unused map[string]interface{}, name string) (core.CodecFactory, error) { - if err := config.ReportUnusedConfig(config_path, unused); err != nil { - return nil, err - } - return &CodecPlainFactory{}, nil + if err := config.ReportUnusedConfig(config_path, unused); err != nil { + return nil, err + } + return &CodecPlainFactory{}, nil } func (f *CodecPlainFactory) NewCodec(callback_func core.CodecCallbackFunc, offset int64) core.Codec { - return &CodecPlain{ - last_offset: offset, - callback_func: callback_func, - } + return &CodecPlain{ + last_offset: offset, + callback_func: callback_func, + } } func (c *CodecPlain) Teardown() int64 { - return c.last_offset + return c.last_offset } func (c *CodecPlain) Event(start_offset int64, end_offset int64, text string) { - c.last_offset = end_offset + c.last_offset 
= end_offset - c.callback_func(start_offset, end_offset, text) + c.callback_func(start_offset, end_offset, text) } func (c *CodecPlain) Meter() { } func (c *CodecPlain) Snapshot() *core.Snapshot { - return nil + return nil } // Register the codec func init() { - core.RegisterCodec("plain", NewPlainCodecFactory) + core.RegisterCodec("plain", NewPlainCodecFactory) } diff --git a/src/lc-lib/core/codec.go b/src/lc-lib/core/codec.go index e4d73571..5c2cf03a 100644 --- a/src/lc-lib/core/codec.go +++ b/src/lc-lib/core/codec.go @@ -17,16 +17,16 @@ package core type Codec interface { - Teardown() int64 - Event(int64, int64, string) - Meter() - Snapshot() *Snapshot + Teardown() int64 + Event(int64, int64, string) + Meter() + Snapshot() *Snapshot } type CodecCallbackFunc func(int64, int64, string) type CodecFactory interface { - NewCodec(CodecCallbackFunc, int64) Codec + NewCodec(CodecCallbackFunc, int64) Codec } type CodecRegistrarFunc func(*Config, string, map[string]interface{}, string) (CodecFactory, error) @@ -34,13 +34,13 @@ type CodecRegistrarFunc func(*Config, string, map[string]interface{}, string) (C var registered_Codecs map[string]CodecRegistrarFunc = make(map[string]CodecRegistrarFunc) func RegisterCodec(codec string, registrar_func CodecRegistrarFunc) { - registered_Codecs[codec] = registrar_func + registered_Codecs[codec] = registrar_func } func AvailableCodecs() (ret []string) { - ret = make([]string, 0, len(registered_Codecs)) - for k := range registered_Codecs { - ret = append(ret, k) - } - return + ret = make([]string, 0, len(registered_Codecs)) + for k := range registered_Codecs { + ret = append(ret, k) + } + return } diff --git a/src/lc-lib/core/config.go b/src/lc-lib/core/config.go index 60a80244..70b4c0cd 100644 --- a/src/lc-lib/core/config.go +++ b/src/lc-lib/core/config.go @@ -20,368 +20,368 @@ package core import ( - "bytes" - "encoding/json" - "fmt" - "github.com/op/go-logging" - "math" - "os" - "path/filepath" - "reflect" - "strings" - "time" + 
"bytes" + "encoding/json" + "fmt" + "github.com/op/go-logging" + "math" + "os" + "path/filepath" + "reflect" + "strings" + "time" ) const ( - default_GeneralConfig_AdminEnabled bool = false - default_GeneralConfig_AdminBind string = "tcp:127.0.0.1:1234" - default_GeneralConfig_PersistDir string = "." - default_GeneralConfig_ProspectInterval time.Duration = 10 * time.Second - default_GeneralConfig_SpoolSize int64 = 1024 - default_GeneralConfig_SpoolMaxBytes int64 = 10485760 - default_GeneralConfig_SpoolTimeout time.Duration = 5 * time.Second - default_GeneralConfig_LineBufferBytes int64 = 16384 - default_GeneralConfig_MaxLineBytes int64 = 1048576 - default_GeneralConfig_LogLevel logging.Level = logging.INFO - default_GeneralConfig_LogStdout bool = true - default_GeneralConfig_LogSyslog bool = false - default_NetworkConfig_Transport string = "tls" - default_NetworkConfig_Timeout time.Duration = 15 * time.Second - default_NetworkConfig_Reconnect time.Duration = 1 * time.Second - default_NetworkConfig_MaxPendingPayloads int64 = 10 - default_StreamConfig_Codec string = "plain" - default_StreamConfig_DeadTime int64 = 86400 + default_GeneralConfig_AdminEnabled bool = false + default_GeneralConfig_AdminBind string = "tcp:127.0.0.1:1234" + default_GeneralConfig_PersistDir string = "." 
+ default_GeneralConfig_ProspectInterval time.Duration = 10 * time.Second + default_GeneralConfig_SpoolSize int64 = 1024 + default_GeneralConfig_SpoolMaxBytes int64 = 10485760 + default_GeneralConfig_SpoolTimeout time.Duration = 5 * time.Second + default_GeneralConfig_LineBufferBytes int64 = 16384 + default_GeneralConfig_MaxLineBytes int64 = 1048576 + default_GeneralConfig_LogLevel logging.Level = logging.INFO + default_GeneralConfig_LogStdout bool = true + default_GeneralConfig_LogSyslog bool = false + default_NetworkConfig_Transport string = "tls" + default_NetworkConfig_Timeout time.Duration = 15 * time.Second + default_NetworkConfig_Reconnect time.Duration = 1 * time.Second + default_NetworkConfig_MaxPendingPayloads int64 = 10 + default_StreamConfig_Codec string = "plain" + default_StreamConfig_DeadTime int64 = 86400 ) var ( - default_GeneralConfig_Host string = "localhost.localdomain" + default_GeneralConfig_Host string = "localhost.localdomain" ) type Config struct { - General GeneralConfig `config:"general"` - Network NetworkConfig `config:"network"` - Files []FileConfig `config:"files"` - Includes []string `config:"includes"` - Stdin StreamConfig `config:"stdin"` + General GeneralConfig `config:"general"` + Network NetworkConfig `config:"network"` + Files []FileConfig `config:"files"` + Includes []string `config:"includes"` + Stdin StreamConfig `config:"stdin"` } type GeneralConfig struct { - AdminEnabled bool `config:"admin enabled"` - AdminBind string `config:"admin listen address"` - PersistDir string `config:"persist directory"` - ProspectInterval time.Duration `config:"prospect interval"` - SpoolSize int64 `config:"spool size"` - SpoolMaxBytes int64 `config:"spool max bytes"` - SpoolTimeout time.Duration `config:"spool timeout"` - LineBufferBytes int64 `config:"line buffer bytes"` - MaxLineBytes int64 `config:"max line bytes"` - LogLevel logging.Level `config:"log level"` - LogStdout bool `config:"log stdout"` - LogSyslog bool `config:"log syslog"` - 
LogFile string `config:"log file"` - Host string `config:"host"` + AdminEnabled bool `config:"admin enabled"` + AdminBind string `config:"admin listen address"` + PersistDir string `config:"persist directory"` + ProspectInterval time.Duration `config:"prospect interval"` + SpoolSize int64 `config:"spool size"` + SpoolMaxBytes int64 `config:"spool max bytes"` + SpoolTimeout time.Duration `config:"spool timeout"` + LineBufferBytes int64 `config:"line buffer bytes"` + MaxLineBytes int64 `config:"max line bytes"` + LogLevel logging.Level `config:"log level"` + LogStdout bool `config:"log stdout"` + LogSyslog bool `config:"log syslog"` + LogFile string `config:"log file"` + Host string `config:"host"` } type NetworkConfig struct { - Transport string `config:"transport"` - Servers []string `config:"servers"` - Timeout time.Duration `config:"timeout"` - Reconnect time.Duration `config:"reconnect"` - MaxPendingPayloads int64 `config:"max pending payloads"` - - Unused map[string]interface{} - TransportFactory TransportFactory + Transport string `config:"transport"` + Servers []string `config:"servers"` + Timeout time.Duration `config:"timeout"` + Reconnect time.Duration `config:"reconnect"` + MaxPendingPayloads int64 `config:"max pending payloads"` + + Unused map[string]interface{} + TransportFactory TransportFactory } type CodecConfigStub struct { - Name string `config:"name"` + Name string `config:"name"` - Unused map[string]interface{} + Unused map[string]interface{} } type StreamConfig struct { - Fields map[string]interface{} `config:"fields"` - Codec CodecConfigStub `config:"codec"` - DeadTime time.Duration `config:"dead time"` + Fields map[string]interface{} `config:"fields"` + Codec CodecConfigStub `config:"codec"` + DeadTime time.Duration `config:"dead time"` - CodecFactory CodecFactory + CodecFactory CodecFactory } type FileConfig struct { - Paths []string `config:"paths"` + Paths []string `config:"paths"` - StreamConfig `config:",embed"` + StreamConfig 
`config:",embed"` } func NewConfig() *Config { - return &Config{} + return &Config{} } func (c *Config) loadFile(path string) (stripped *bytes.Buffer, err error) { - stripped = new(bytes.Buffer) - - file, err := os.Open(path) - if err != nil { - err = fmt.Errorf("Failed to open config file: %s", err) - return - } - defer file.Close() - - stat, err := file.Stat() - if err != nil { - err = fmt.Errorf("Stat failed for config file: %s", err) - return - } - if stat.Size() > (10 << 20) { - err = fmt.Errorf("Config file too large (%s)", stat.Size()) - return - } - - // Strip comments and read config into stripped - var s, p, state int - { - // Pull the config file into memory - buffer := make([]byte, stat.Size()) - _, err = file.Read(buffer) - if err != nil { - return - } - - for p < len(buffer) { - b := buffer[p] - if state == 0 { - // Main body - if b == '"' { - state = 1 - } else if b == '\'' { - state = 2 - } else if b == '#' { - state = 3 - stripped.Write(buffer[s:p]) - } else if b == '/' { - state = 4 - } - } else if state == 1 { - // Double-quoted string - if b == '\\' { - state = 5 - } else if b == '"' { - state = 0 - } - } else if state == 2 { - // Single-quoted string - if b == '\\' { - state = 6 - } else if b == '\'' { - state = 0 - } - } else if state == 3 { - // End of line comment (#) - if b == '\r' || b == '\n' { - state = 0 - s = p + 1 - } - } else if state == 4 { - // Potential start of multiline comment - if b == '*' { - state = 7 - stripped.Write(buffer[s : p-1]) - } else { - state = 0 - } - } else if state == 5 { - // Escape within double quote - state = 1 - } else if state == 6 { - // Escape within single quote - state = 2 - } else if state == 7 { - // Multiline comment (/**/) - if b == '*' { - state = 8 - } - } else { // state == 8 - // Potential end of multiline comment - if b == '/' { - state = 0 - s = p + 1 - } else { - state = 7 - } - } - p++ - } - stripped.Write(buffer[s:p]) - } - - return + stripped = new(bytes.Buffer) + + file, err := 
os.Open(path) + if err != nil { + err = fmt.Errorf("Failed to open config file: %s", err) + return + } + defer file.Close() + + stat, err := file.Stat() + if err != nil { + err = fmt.Errorf("Stat failed for config file: %s", err) + return + } + if stat.Size() > (10 << 20) { + err = fmt.Errorf("Config file too large (%s)", stat.Size()) + return + } + + // Strip comments and read config into stripped + var s, p, state int + { + // Pull the config file into memory + buffer := make([]byte, stat.Size()) + _, err = file.Read(buffer) + if err != nil { + return + } + + for p < len(buffer) { + b := buffer[p] + if state == 0 { + // Main body + if b == '"' { + state = 1 + } else if b == '\'' { + state = 2 + } else if b == '#' { + state = 3 + stripped.Write(buffer[s:p]) + } else if b == '/' { + state = 4 + } + } else if state == 1 { + // Double-quoted string + if b == '\\' { + state = 5 + } else if b == '"' { + state = 0 + } + } else if state == 2 { + // Single-quoted string + if b == '\\' { + state = 6 + } else if b == '\'' { + state = 0 + } + } else if state == 3 { + // End of line comment (#) + if b == '\r' || b == '\n' { + state = 0 + s = p + 1 + } + } else if state == 4 { + // Potential start of multiline comment + if b == '*' { + state = 7 + stripped.Write(buffer[s : p-1]) + } else { + state = 0 + } + } else if state == 5 { + // Escape within double quote + state = 1 + } else if state == 6 { + // Escape within single quote + state = 2 + } else if state == 7 { + // Multiline comment (/**/) + if b == '*' { + state = 8 + } + } else { // state == 8 + // Potential end of multiline comment + if b == '/' { + state = 0 + s = p + 1 + } else { + state = 7 + } + } + p++ + } + stripped.Write(buffer[s:p]) + } + + return } // TODO: Config from a TOML? 
Maybe a custom one func (c *Config) Load(path string) (err error) { - var data *bytes.Buffer - - // Read the main config file - if data, err = c.loadFile(path); err != nil { - return - } - - // Pull the entire structure into raw_config - raw_config := make(map[string]interface{}) - if err = json.Unmarshal(data.Bytes(), &raw_config); err != nil { - return - } - - // Fill in defaults where the zero-value is a valid setting - c.General.AdminEnabled = default_GeneralConfig_AdminEnabled - c.General.LogLevel = default_GeneralConfig_LogLevel - c.General.LogStdout = default_GeneralConfig_LogStdout - c.General.LogSyslog = default_GeneralConfig_LogSyslog - - // Populate configuration - reporting errors on spelling mistakes etc. - if err = c.PopulateConfig(c, "/", raw_config); err != nil { - return - } - - // Iterate includes - for _, glob := range c.Includes { - // Glob the path - var matches []string - if matches, err = filepath.Glob(glob); err != nil { - return - } - - for _, include := range matches { - // Read the include - if data, err = c.loadFile(include); err != nil { - return - } - - // Pull the structure into raw_include - raw_include := make([]interface{}, 0) - if err = json.Unmarshal(data.Bytes(), &raw_include); err != nil { - return - } - - // Append to configuration - if err = c.populateConfigSlice(reflect.ValueOf(c).Elem().FieldByName("Files"), fmt.Sprintf("%s/", include), raw_include); err != nil { - return - } - } - } - - if c.General.AdminBind == "" { - c.General.AdminBind = default_GeneralConfig_AdminBind - } - - if c.General.PersistDir == "" { - c.General.PersistDir = default_GeneralConfig_PersistDir - } - - if c.General.ProspectInterval == time.Duration(0) { - c.General.ProspectInterval = default_GeneralConfig_ProspectInterval - } - - if c.General.SpoolSize == 0 { - c.General.SpoolSize = default_GeneralConfig_SpoolSize - } - - // Enforce maximum of 2 GB since event transmit length is uint32 - if c.General.SpoolMaxBytes == 0 { - c.General.SpoolMaxBytes = 
default_GeneralConfig_SpoolMaxBytes - } - if c.General.SpoolMaxBytes > 2*1024*1024*1024 { - err = fmt.Errorf("/general/spool max bytes can not be greater than 2 GiB") - return - } - - if c.General.SpoolTimeout == time.Duration(0) { - c.General.SpoolTimeout = default_GeneralConfig_SpoolTimeout - } - - if c.General.LineBufferBytes == 0 { - c.General.LineBufferBytes = default_GeneralConfig_LineBufferBytes - } - if c.General.LineBufferBytes < 1 { - err = fmt.Errorf("/general/line buffer bytes must be greater than 1") - return - } - - // Max line bytes can not be larger than spool max bytes - if c.General.MaxLineBytes == 0 { - c.General.MaxLineBytes = default_GeneralConfig_MaxLineBytes - } - if c.General.MaxLineBytes > c.General.SpoolMaxBytes { - err = fmt.Errorf("/general/max line bytes can not be greater than /general/spool max bytes") - return - } - - if c.General.Host == "" { - ret, err := os.Hostname() - if err == nil { - c.General.Host = ret - } else { - c.General.Host = default_GeneralConfig_Host - log.Warning("Failed to determine the FQDN; using '%s'.", c.General.Host) - } - } - - if c.Network.Transport == "" { - c.Network.Transport = default_NetworkConfig_Transport - } - - if registrar_func, ok := registered_Transports[c.Network.Transport]; ok { - if c.Network.TransportFactory, err = registrar_func(c, "/network/", c.Network.Unused, c.Network.Transport); err != nil { - return - } - } else { - err = fmt.Errorf("Unrecognised transport '%s'", c.Network.Transport) - return - } - - if c.Network.Timeout == time.Duration(0) { - c.Network.Timeout = default_NetworkConfig_Timeout - } - if c.Network.Reconnect == time.Duration(0) { - c.Network.Reconnect = default_NetworkConfig_Reconnect - } - if c.Network.MaxPendingPayloads == 0 { - c.Network.MaxPendingPayloads = default_NetworkConfig_MaxPendingPayloads - } - - for k := range c.Files { - if err = c.initStreamConfig(fmt.Sprintf("/files[%d]/codec/", k), &c.Files[k].StreamConfig); err != nil { - return - } - } - - if err = 
c.initStreamConfig("/stdin", &c.Stdin); err != nil { - return - } - - return + var data *bytes.Buffer + + // Read the main config file + if data, err = c.loadFile(path); err != nil { + return + } + + // Pull the entire structure into raw_config + raw_config := make(map[string]interface{}) + if err = json.Unmarshal(data.Bytes(), &raw_config); err != nil { + return + } + + // Fill in defaults where the zero-value is a valid setting + c.General.AdminEnabled = default_GeneralConfig_AdminEnabled + c.General.LogLevel = default_GeneralConfig_LogLevel + c.General.LogStdout = default_GeneralConfig_LogStdout + c.General.LogSyslog = default_GeneralConfig_LogSyslog + + // Populate configuration - reporting errors on spelling mistakes etc. + if err = c.PopulateConfig(c, "/", raw_config); err != nil { + return + } + + // Iterate includes + for _, glob := range c.Includes { + // Glob the path + var matches []string + if matches, err = filepath.Glob(glob); err != nil { + return + } + + for _, include := range matches { + // Read the include + if data, err = c.loadFile(include); err != nil { + return + } + + // Pull the structure into raw_include + raw_include := make([]interface{}, 0) + if err = json.Unmarshal(data.Bytes(), &raw_include); err != nil { + return + } + + // Append to configuration + if err = c.populateConfigSlice(reflect.ValueOf(c).Elem().FieldByName("Files"), fmt.Sprintf("%s/", include), raw_include); err != nil { + return + } + } + } + + if c.General.AdminBind == "" { + c.General.AdminBind = default_GeneralConfig_AdminBind + } + + if c.General.PersistDir == "" { + c.General.PersistDir = default_GeneralConfig_PersistDir + } + + if c.General.ProspectInterval == time.Duration(0) { + c.General.ProspectInterval = default_GeneralConfig_ProspectInterval + } + + if c.General.SpoolSize == 0 { + c.General.SpoolSize = default_GeneralConfig_SpoolSize + } + + // Enforce maximum of 2 GB since event transmit length is uint32 + if c.General.SpoolMaxBytes == 0 { + 
c.General.SpoolMaxBytes = default_GeneralConfig_SpoolMaxBytes + } + if c.General.SpoolMaxBytes > 2*1024*1024*1024 { + err = fmt.Errorf("/general/spool max bytes can not be greater than 2 GiB") + return + } + + if c.General.SpoolTimeout == time.Duration(0) { + c.General.SpoolTimeout = default_GeneralConfig_SpoolTimeout + } + + if c.General.LineBufferBytes == 0 { + c.General.LineBufferBytes = default_GeneralConfig_LineBufferBytes + } + if c.General.LineBufferBytes < 1 { + err = fmt.Errorf("/general/line buffer bytes must be greater than 1") + return + } + + // Max line bytes can not be larger than spool max bytes + if c.General.MaxLineBytes == 0 { + c.General.MaxLineBytes = default_GeneralConfig_MaxLineBytes + } + if c.General.MaxLineBytes > c.General.SpoolMaxBytes { + err = fmt.Errorf("/general/max line bytes can not be greater than /general/spool max bytes") + return + } + + if c.General.Host == "" { + ret, err := os.Hostname() + if err == nil { + c.General.Host = ret + } else { + c.General.Host = default_GeneralConfig_Host + log.Warning("Failed to determine the FQDN; using '%s'.", c.General.Host) + } + } + + if c.Network.Transport == "" { + c.Network.Transport = default_NetworkConfig_Transport + } + + if registrar_func, ok := registered_Transports[c.Network.Transport]; ok { + if c.Network.TransportFactory, err = registrar_func(c, "/network/", c.Network.Unused, c.Network.Transport); err != nil { + return + } + } else { + err = fmt.Errorf("Unrecognised transport '%s'", c.Network.Transport) + return + } + + if c.Network.Timeout == time.Duration(0) { + c.Network.Timeout = default_NetworkConfig_Timeout + } + if c.Network.Reconnect == time.Duration(0) { + c.Network.Reconnect = default_NetworkConfig_Reconnect + } + if c.Network.MaxPendingPayloads == 0 { + c.Network.MaxPendingPayloads = default_NetworkConfig_MaxPendingPayloads + } + + for k := range c.Files { + if err = c.initStreamConfig(fmt.Sprintf("/files[%d]/codec/", k), &c.Files[k].StreamConfig); err != nil { + 
return
+		}
+	}
+
+	if err = c.initStreamConfig("/stdin", &c.Stdin); err != nil {
+		return
+	}
+
+	return
 }
 
 func (c *Config) initStreamConfig(path string, stream_config *StreamConfig) (err error) {
-  if stream_config.Codec.Name == "" {
-    stream_config.Codec.Name = default_StreamConfig_Codec
-  }
+	if stream_config.Codec.Name == "" {
+		stream_config.Codec.Name = default_StreamConfig_Codec
+	}
 
-  if registrar_func, ok := registered_Codecs[stream_config.Codec.Name]; ok {
-    if stream_config.CodecFactory, err = registrar_func(c, path, stream_config.Codec.Unused, stream_config.Codec.Name); err != nil {
-      return
-    }
-  } else {
-    return fmt.Errorf("Unrecognised codec '%s' for %s", stream_config.Codec.Name, path)
-  }
+	if registrar_func, ok := registered_Codecs[stream_config.Codec.Name]; ok {
+		if stream_config.CodecFactory, err = registrar_func(c, path, stream_config.Codec.Unused, stream_config.Codec.Name); err != nil {
+			return
+		}
+	} else {
+		return fmt.Errorf("Unrecognised codec '%s' for %s", stream_config.Codec.Name, path)
+	}
 
-  if stream_config.DeadTime == time.Duration(0) {
-    stream_config.DeadTime = time.Duration(default_StreamConfig_DeadTime) * time.Second
-  }
+	if stream_config.DeadTime == time.Duration(0) {
+		stream_config.DeadTime = time.Duration(default_StreamConfig_DeadTime) * time.Second
+	}
 
-  // TODO: EDGE CASE: Event transmit length is uint32, if fields length is rediculous we will fail
+	// TODO: EDGE CASE: Event transmit length is uint32, if fields length is ridiculous we will fail
 
-  return nil
+	return nil
 }
 
 // TODO: This should be pushed to a wrapper or module
@@ -390,163 +390,163 @@ func (c *Config) initStreamConfig(path string, stream_config *StreamConfig) (err
 // or an error is reported if "Unused" is not available
 // We can then take the unused configuration dynamically at runtime based on another value
 func (c *Config) PopulateConfig(config interface{}, config_path string, raw_config map[string]interface{}) (err error) {
-  vconfig := 
reflect.ValueOf(config).Elem() + vconfig := reflect.ValueOf(config).Elem() FieldLoop: - for i := 0; i < vconfig.NumField(); i++ { - field := vconfig.Field(i) - if !field.CanSet() { - continue - } - fieldtype := vconfig.Type().Field(i) - tag := fieldtype.Tag.Get("config") - mods := strings.Split(tag, ",") - tag = mods[0] - mods = mods[1:] - for _, mod := range mods { - if mod == "embed" && field.Kind() == reflect.Struct { - if err = c.PopulateConfig(field.Addr().Interface(), config_path, raw_config); err != nil { - return - } - continue FieldLoop - } - } - if tag == "" { - continue - } - if _, ok := raw_config[tag]; ok { - if field.Kind() == reflect.Struct { - if reflect.TypeOf(raw_config[tag]).Kind() == reflect.Map { - if err = c.PopulateConfig(field.Addr().Interface(), fmt.Sprintf("%s%s/", config_path, tag), raw_config[tag].(map[string]interface{})); err != nil { - return - } - delete(raw_config, tag) - continue - } else { - err = fmt.Errorf("Option %s%s must be a hash", config_path, tag) - return - } - } - value := reflect.ValueOf(raw_config[tag]) - if value.Type().AssignableTo(field.Type()) { - field.Set(value) - } else if value.Kind() == reflect.Slice && field.Kind() == reflect.Slice { - if err = c.populateConfigSlice(field, fmt.Sprintf("%s%s/", config_path, tag), raw_config[tag].([]interface{})); err != nil { - return - } - } else if value.Kind() == reflect.Map && field.Kind() == reflect.Map { - if field.IsNil() { - field.Set(reflect.MakeMap(field.Type())) - } - for _, j := range value.MapKeys() { - item := value.MapIndex(j) - if item.Elem().Type().AssignableTo(field.Type().Elem()) { - field.SetMapIndex(j, item.Elem()) - } else { - err = fmt.Errorf("Option %s%s[%s] must be %s or similar", config_path, tag, j, field.Type().Elem()) - return - } - } - } else if field.Type().String() == "time.Duration" { - var duration float64 - vduration := reflect.ValueOf(duration) - fail := true - if value.Type().AssignableTo(vduration.Type()) { - duration = value.Float() - if 
duration >= math.MinInt64 && duration <= math.MaxInt64 { - field.Set(reflect.ValueOf(time.Duration(int64(duration)) * time.Second)) - fail = false - } - } else if value.Kind() == reflect.String { - var tduration time.Duration - if tduration, err = time.ParseDuration(value.String()); err == nil { - field.Set(reflect.ValueOf(tduration)) - fail = false - } - } - if fail { - err = fmt.Errorf("Option %s%s must be a valid numeric or string duration", config_path, tag) - return - } - } else if field.Type().String() == "logging.Level" { - fail := true - if value.Kind() == reflect.String { - var llevel logging.Level - if llevel, err = logging.LogLevel(value.String()); err == nil { - fail = false - field.Set(reflect.ValueOf(llevel)) - } - } - if fail { - err = fmt.Errorf("Option %s%s is not a valid log level (critical, error, warning, notice, info, debug)", config_path, tag) - return - } - } else if field.Kind() == reflect.Int64 { - fail := true - if value.Kind() == reflect.Float64 { - number := value.Float() - if math.Floor(number) == number { - fail = false - field.Set(reflect.ValueOf(int64(number))) - } - } - if fail { - err = fmt.Errorf("Option %s%s is not a valid integer", config_path, tag, field.Type()) - return - } - } else if field.Kind() == reflect.Int { - fail := true - if value.Kind() == reflect.Float64 { - number := value.Float() - if math.Floor(number) == number { - fail = false - field.Set(reflect.ValueOf(int(number))) - } - } - if fail { - err = fmt.Errorf("Option %s%s is not a valid integer", config_path, tag, field.Type()) - return - } - } else { - err = fmt.Errorf("Option %s%s must be %s or similar (%s provided)", config_path, tag, field.Type(), value.Type()) - return - } - delete(raw_config, tag) - } - } - if unused := vconfig.FieldByName("Unused"); unused.IsValid() { - if unused.IsNil() { - unused.Set(reflect.MakeMap(unused.Type())) - } - for k, v := range raw_config { - unused.SetMapIndex(reflect.ValueOf(k), reflect.ValueOf(v)) - } - return - } - return 
c.ReportUnusedConfig(config_path, raw_config) + for i := 0; i < vconfig.NumField(); i++ { + field := vconfig.Field(i) + if !field.CanSet() { + continue + } + fieldtype := vconfig.Type().Field(i) + tag := fieldtype.Tag.Get("config") + mods := strings.Split(tag, ",") + tag = mods[0] + mods = mods[1:] + for _, mod := range mods { + if mod == "embed" && field.Kind() == reflect.Struct { + if err = c.PopulateConfig(field.Addr().Interface(), config_path, raw_config); err != nil { + return + } + continue FieldLoop + } + } + if tag == "" { + continue + } + if _, ok := raw_config[tag]; ok { + if field.Kind() == reflect.Struct { + if reflect.TypeOf(raw_config[tag]).Kind() == reflect.Map { + if err = c.PopulateConfig(field.Addr().Interface(), fmt.Sprintf("%s%s/", config_path, tag), raw_config[tag].(map[string]interface{})); err != nil { + return + } + delete(raw_config, tag) + continue + } else { + err = fmt.Errorf("Option %s%s must be a hash", config_path, tag) + return + } + } + value := reflect.ValueOf(raw_config[tag]) + if value.Type().AssignableTo(field.Type()) { + field.Set(value) + } else if value.Kind() == reflect.Slice && field.Kind() == reflect.Slice { + if err = c.populateConfigSlice(field, fmt.Sprintf("%s%s/", config_path, tag), raw_config[tag].([]interface{})); err != nil { + return + } + } else if value.Kind() == reflect.Map && field.Kind() == reflect.Map { + if field.IsNil() { + field.Set(reflect.MakeMap(field.Type())) + } + for _, j := range value.MapKeys() { + item := value.MapIndex(j) + if item.Elem().Type().AssignableTo(field.Type().Elem()) { + field.SetMapIndex(j, item.Elem()) + } else { + err = fmt.Errorf("Option %s%s[%s] must be %s or similar", config_path, tag, j, field.Type().Elem()) + return + } + } + } else if field.Type().String() == "time.Duration" { + var duration float64 + vduration := reflect.ValueOf(duration) + fail := true + if value.Type().AssignableTo(vduration.Type()) { + duration = value.Float() + if duration >= math.MinInt64 && duration <= 
math.MaxInt64 { + field.Set(reflect.ValueOf(time.Duration(int64(duration)) * time.Second)) + fail = false + } + } else if value.Kind() == reflect.String { + var tduration time.Duration + if tduration, err = time.ParseDuration(value.String()); err == nil { + field.Set(reflect.ValueOf(tduration)) + fail = false + } + } + if fail { + err = fmt.Errorf("Option %s%s must be a valid numeric or string duration", config_path, tag) + return + } + } else if field.Type().String() == "logging.Level" { + fail := true + if value.Kind() == reflect.String { + var llevel logging.Level + if llevel, err = logging.LogLevel(value.String()); err == nil { + fail = false + field.Set(reflect.ValueOf(llevel)) + } + } + if fail { + err = fmt.Errorf("Option %s%s is not a valid log level (critical, error, warning, notice, info, debug)", config_path, tag) + return + } + } else if field.Kind() == reflect.Int64 { + fail := true + if value.Kind() == reflect.Float64 { + number := value.Float() + if math.Floor(number) == number { + fail = false + field.Set(reflect.ValueOf(int64(number))) + } + } + if fail { + err = fmt.Errorf("Option %s%s is not a valid integer", config_path, tag, field.Type()) + return + } + } else if field.Kind() == reflect.Int { + fail := true + if value.Kind() == reflect.Float64 { + number := value.Float() + if math.Floor(number) == number { + fail = false + field.Set(reflect.ValueOf(int(number))) + } + } + if fail { + err = fmt.Errorf("Option %s%s is not a valid integer", config_path, tag, field.Type()) + return + } + } else { + err = fmt.Errorf("Option %s%s must be %s or similar (%s provided)", config_path, tag, field.Type(), value.Type()) + return + } + delete(raw_config, tag) + } + } + if unused := vconfig.FieldByName("Unused"); unused.IsValid() { + if unused.IsNil() { + unused.Set(reflect.MakeMap(unused.Type())) + } + for k, v := range raw_config { + unused.SetMapIndex(reflect.ValueOf(k), reflect.ValueOf(v)) + } + return + } + return c.ReportUnusedConfig(config_path, 
raw_config) } func (c *Config) populateConfigSlice(field reflect.Value, config_path string, raw_config []interface{}) (err error) { - elemtype := field.Type().Elem() - if elemtype.Kind() == reflect.Struct { - for j := 0; j < len(raw_config); j++ { - item := reflect.New(elemtype) - if err = c.PopulateConfig(item.Interface(), fmt.Sprintf("%s[%d]/", config_path, j), raw_config[j].(map[string]interface{})); err != nil { - return - } - field.Set(reflect.Append(field, item.Elem())) - } - } else { - for j := 0; j < len(raw_config); j++ { - field.Set(reflect.Append(field, reflect.ValueOf(raw_config[j]))) - } - } - return + elemtype := field.Type().Elem() + if elemtype.Kind() == reflect.Struct { + for j := 0; j < len(raw_config); j++ { + item := reflect.New(elemtype) + if err = c.PopulateConfig(item.Interface(), fmt.Sprintf("%s[%d]/", config_path, j), raw_config[j].(map[string]interface{})); err != nil { + return + } + field.Set(reflect.Append(field, item.Elem())) + } + } else { + for j := 0; j < len(raw_config); j++ { + field.Set(reflect.Append(field, reflect.ValueOf(raw_config[j]))) + } + } + return } func (c *Config) ReportUnusedConfig(config_path string, raw_config map[string]interface{}) (err error) { - for k := range raw_config { - err = fmt.Errorf("Option %s%s is not available", config_path, k) - return - } - return + for k := range raw_config { + err = fmt.Errorf("Option %s%s is not available", config_path, k) + return + } + return } diff --git a/src/lc-lib/core/event.go b/src/lc-lib/core/event.go index 5b9c83a1..d11c0483 100644 --- a/src/lc-lib/core/event.go +++ b/src/lc-lib/core/event.go @@ -17,17 +17,17 @@ package core import ( - "encoding/json" + "encoding/json" ) type Event map[string]interface{} type EventDescriptor struct { - Stream Stream - Offset int64 - Event []byte + Stream Stream + Offset int64 + Event []byte } func (e *Event) Encode() ([]byte, error) { - return json.Marshal(e) + return json.Marshal(e) } diff --git a/src/lc-lib/core/logging.go 
b/src/lc-lib/core/logging.go index 7a5fd43c..515be966 100644 --- a/src/lc-lib/core/logging.go +++ b/src/lc-lib/core/logging.go @@ -12,7 +12,7 @@ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. -*/ + */ package core diff --git a/src/lc-lib/core/pipeline.go b/src/lc-lib/core/pipeline.go index 480e4ddb..d22e8c2e 100644 --- a/src/lc-lib/core/pipeline.go +++ b/src/lc-lib/core/pipeline.go @@ -17,136 +17,136 @@ package core import ( - "sync" + "sync" ) type Pipeline struct { - pipes []IPipelineSegment - signal chan interface{} - group sync.WaitGroup - config_sinks map[*PipelineConfigReceiver]chan *Config - snapshot_chan chan []*Snapshot - snapshot_pipes map[IPipelineSnapshotProvider]IPipelineSnapshotProvider + pipes []IPipelineSegment + signal chan interface{} + group sync.WaitGroup + config_sinks map[*PipelineConfigReceiver]chan *Config + snapshot_chan chan []*Snapshot + snapshot_pipes map[IPipelineSnapshotProvider]IPipelineSnapshotProvider } func NewPipeline() *Pipeline { - return &Pipeline{ - pipes: make([]IPipelineSegment, 0, 5), - signal: make(chan interface{}), - config_sinks: make(map[*PipelineConfigReceiver]chan *Config), - snapshot_chan: make(chan []*Snapshot), - snapshot_pipes: make(map[IPipelineSnapshotProvider]IPipelineSnapshotProvider), - } + return &Pipeline{ + pipes: make([]IPipelineSegment, 0, 5), + signal: make(chan interface{}), + config_sinks: make(map[*PipelineConfigReceiver]chan *Config), + snapshot_chan: make(chan []*Snapshot), + snapshot_pipes: make(map[IPipelineSnapshotProvider]IPipelineSnapshotProvider), + } } func (p *Pipeline) Register(ipipe IPipelineSegment) { - p.group.Add(1) + p.group.Add(1) - pipe := ipipe.getStruct() - pipe.signal = p.signal - pipe.group = &p.group + pipe := ipipe.getStruct() + pipe.signal = p.signal + pipe.group = &p.group - p.pipes = append(p.pipes, ipipe) + p.pipes = append(p.pipes, ipipe) - 
if ipipe_ext, ok := ipipe.(IPipelineConfigReceiver); ok { - pipe_ext := ipipe_ext.getConfigReceiverStruct() - sink := make(chan *Config) - p.config_sinks[pipe_ext] = sink - pipe_ext.config_chan = sink - } + if ipipe_ext, ok := ipipe.(IPipelineConfigReceiver); ok { + pipe_ext := ipipe_ext.getConfigReceiverStruct() + sink := make(chan *Config) + p.config_sinks[pipe_ext] = sink + pipe_ext.config_chan = sink + } - if ipipe_ext, ok := ipipe.(IPipelineSnapshotProvider); ok { - p.snapshot_pipes[ipipe_ext] = ipipe_ext - } + if ipipe_ext, ok := ipipe.(IPipelineSnapshotProvider); ok { + p.snapshot_pipes[ipipe_ext] = ipipe_ext + } } func (p *Pipeline) Start() { - for _, ipipe := range p.pipes { - go ipipe.Run() - } + for _, ipipe := range p.pipes { + go ipipe.Run() + } } func (p *Pipeline) Shutdown() { - close(p.signal) + close(p.signal) } func (p *Pipeline) Wait() { - p.group.Wait() + p.group.Wait() } func (p *Pipeline) SendConfig(config *Config) { - for _, sink := range p.config_sinks { - sink <- config - } + for _, sink := range p.config_sinks { + sink <- config + } } func (p *Pipeline) Snapshot() *Snapshot { - snap := NewSnapshot("Log Courier") + snap := NewSnapshot("Log Courier") - for _, sink := range p.snapshot_pipes { - for _, sub := range sink.Snapshot() { - snap.AddSub(sub) - } - } + for _, sink := range p.snapshot_pipes { + for _, sub := range sink.Snapshot() { + snap.AddSub(sub) + } + } - return snap + return snap } type IPipelineSegment interface { - Run() - getStruct() *PipelineSegment + Run() + getStruct() *PipelineSegment } type PipelineSegment struct { - signal <-chan interface{} - group *sync.WaitGroup + signal <-chan interface{} + group *sync.WaitGroup } func (s *PipelineSegment) Run() { - panic("Run() not implemented") + panic("Run() not implemented") } func (s *PipelineSegment) getStruct() *PipelineSegment { - return s + return s } func (s *PipelineSegment) OnShutdown() <-chan interface{} { - return s.signal + return s.signal } func (s *PipelineSegment) 
Done() { - s.group.Done() + s.group.Done() } type IPipelineConfigReceiver interface { - getConfigReceiverStruct() *PipelineConfigReceiver + getConfigReceiverStruct() *PipelineConfigReceiver } type PipelineConfigReceiver struct { - config_chan <-chan *Config + config_chan <-chan *Config } func (s *PipelineConfigReceiver) getConfigReceiverStruct() *PipelineConfigReceiver { - return s + return s } func (s *PipelineConfigReceiver) OnConfig() <-chan *Config { - return s.config_chan + return s.config_chan } type IPipelineSnapshotProvider interface { - Snapshot() []*Snapshot + Snapshot() []*Snapshot } type PipelineSnapshotProvider struct { } func (s *PipelineSnapshotProvider) getSnapshotProviderStruct() *PipelineSnapshotProvider { - return s + return s } func (s *PipelineSnapshotProvider) Snapshot() []*Snapshot { - ret := NewSnapshot("Unknown") - ret.AddEntry("Error", "NotImplemented") - return []*Snapshot{ret} + ret := NewSnapshot("Unknown") + ret.AddEntry("Error", "NotImplemented") + return []*Snapshot{ret} } diff --git a/src/lc-lib/core/snapshot.go b/src/lc-lib/core/snapshot.go index 12551f41..ead2602f 100644 --- a/src/lc-lib/core/snapshot.go +++ b/src/lc-lib/core/snapshot.go @@ -12,80 +12,80 @@ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
-*/ + */ package core import ( - "sort" + "sort" ) type Snapshot struct { - Desc string - Entries map[string]interface{} - Keys []string - Subs map[string]*Snapshot - SubKeys []string + Desc string + Entries map[string]interface{} + Keys []string + Subs map[string]*Snapshot + SubKeys []string } func NewSnapshot(desc string) *Snapshot { - return &Snapshot{ - Desc: desc, - Entries: make(map[string]interface{}), - Keys: make([]string, 0), - Subs: make(map[string]*Snapshot), - SubKeys: make([]string, 0), - } + return &Snapshot{ + Desc: desc, + Entries: make(map[string]interface{}), + Keys: make([]string, 0), + Subs: make(map[string]*Snapshot), + SubKeys: make([]string, 0), + } } func (s *Snapshot) Sort() { - sort.Strings(s.Keys) - sort.Strings(s.SubKeys) + sort.Strings(s.Keys) + sort.Strings(s.SubKeys) } func (s *Snapshot) Description() string { - return s.Desc + return s.Desc } func (s *Snapshot) AddEntry(name string, value interface{}) { - s.Entries[name] = value - s.Keys = append(s.Keys, name) + s.Entries[name] = value + s.Keys = append(s.Keys, name) } func (s *Snapshot) EntryByName(name string) (interface{}, bool) { - if v, ok := s.Entries[name]; ok { - return v, true - } + if v, ok := s.Entries[name]; ok { + return v, true + } - return nil, false + return nil, false } func (s *Snapshot) Entry(i int) (string, interface{}) { - if i < 0 || i >= len(s.Keys) { - panic("Out of bounds") - } + if i < 0 || i >= len(s.Keys) { + panic("Out of bounds") + } - return s.Keys[i], s.Entries[s.Keys[i]] + return s.Keys[i], s.Entries[s.Keys[i]] } func (s *Snapshot) NumEntries() int { - return len(s.Keys) + return len(s.Keys) } func (s *Snapshot) AddSub(sub *Snapshot) { - desc := sub.Description() - s.Subs[desc] = sub - s.SubKeys = append(s.SubKeys, desc) + desc := sub.Description() + s.Subs[desc] = sub + s.SubKeys = append(s.SubKeys, desc) } func (s *Snapshot) Sub(i int) *Snapshot { - if i < 0 || i >= len(s.SubKeys) { - panic("Out of bounds") - } + if i < 0 || i >= len(s.SubKeys) { + 
panic("Out of bounds") + } - return s.Subs[s.SubKeys[i]] + return s.Subs[s.SubKeys[i]] } func (s *Snapshot) NumSubs() int { - return len(s.SubKeys) + return len(s.SubKeys) } diff --git a/src/lc-lib/core/stream.go b/src/lc-lib/core/stream.go index 5b096fa3..e07f4ef1 100644 --- a/src/lc-lib/core/stream.go +++ b/src/lc-lib/core/stream.go @@ -20,5 +20,5 @@ import "os" // A stream should be a pointer object that uniquely identified a file stream type Stream interface { - Info() (string, os.FileInfo) + Info() (string, os.FileInfo) } diff --git a/src/lc-lib/core/transport.go b/src/lc-lib/core/transport.go index 1ab49a7f..83e22c97 100644 --- a/src/lc-lib/core/transport.go +++ b/src/lc-lib/core/transport.go @@ -17,22 +17,22 @@ package core const ( - Reload_None = iota - Reload_Servers - Reload_Transport + Reload_None = iota + Reload_Servers + Reload_Transport ) type Transport interface { - ReloadConfig(*NetworkConfig) int - Init() error - CanSend() <-chan int - Write(string, []byte) error - Read() <-chan interface{} - Shutdown() + ReloadConfig(*NetworkConfig) int + Init() error + CanSend() <-chan int + Write(string, []byte) error + Read() <-chan interface{} + Shutdown() } type TransportFactory interface { - NewTransport(*NetworkConfig) (Transport, error) + NewTransport(*NetworkConfig) (Transport, error) } type TransportRegistrarFunc func(*Config, string, map[string]interface{}, string) (TransportFactory, error) @@ -40,13 +40,13 @@ type TransportRegistrarFunc func(*Config, string, map[string]interface{}, string var registered_Transports map[string]TransportRegistrarFunc = make(map[string]TransportRegistrarFunc) func RegisterTransport(transport string, registrar_func TransportRegistrarFunc) { - registered_Transports[transport] = registrar_func + registered_Transports[transport] = registrar_func } func AvailableTransports() (ret []string) { - ret = make([]string, 0, len(registered_Transports)) - for k := range registered_Transports { - ret = append(ret, k) - } - return + ret = 
make([]string, 0, len(registered_Transports)) + for k := range registered_Transports { + ret = append(ret, k) + } + return } diff --git a/src/lc-lib/core/util.go b/src/lc-lib/core/util.go index f09b6d9f..2bb6ac5e 100644 --- a/src/lc-lib/core/util.go +++ b/src/lc-lib/core/util.go @@ -12,31 +12,31 @@ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. -*/ + */ package core import ( - "math" - "time" + "math" + "time" ) func CalculateSpeed(duration time.Duration, speed float64, count float64, seconds_no_change *int) float64 { - if count == 0 { - *seconds_no_change++ - } else { - *seconds_no_change = 0 - } + if count == 0 { + *seconds_no_change++ + } else { + *seconds_no_change = 0 + } - if speed == 0. { - return count - } + if speed == 0. { + return count + } - if *seconds_no_change >= 5 { - *seconds_no_change = 0 - return 0. - } + if *seconds_no_change >= 5 { + *seconds_no_change = 0 + return 0. + } - // Calculate a moving average over 5 seconds - use similiar weight as load average - return count + math.Exp(float64(duration) / float64(time.Second) / -5.) 
* (speed - count) + // Calculate a moving average over 5 seconds - use similar weight as load average + return count + math.Exp(float64(duration)/float64(time.Second)/-5.)*(speed-count) } diff --git a/src/lc-lib/harvester/harvester.go b/src/lc-lib/harvester/harvester.go index 669b8773..62894d8e 100644 --- a/src/lc-lib/harvester/harvester.go +++ b/src/lc-lib/harvester/harvester.go @@ -20,389 +20,389 @@ package harvester import ( - "fmt" - "io" - "github.com/driskell/log-courier/src/lc-lib/core" - "os" - "sync" - "time" + "fmt" + "github.com/driskell/log-courier/src/lc-lib/core" + "io" + "os" + "sync" + "time" ) type HarvesterFinish struct { - Last_Offset int64 - Error error + Last_Offset int64 + Error error } type Harvester struct { - sync.RWMutex - - stop_chan chan interface{} - return_chan chan *HarvesterFinish - stream core.Stream - fileinfo os.FileInfo - path string - config *core.Config - stream_config *core.StreamConfig - offset int64 - output chan<- *core.EventDescriptor - codec core.Codec - file *os.File - split bool - - line_speed float64 - byte_speed float64 - line_count uint64 - byte_count uint64 - last_eof_off *int64 - last_eof *time.Time + sync.RWMutex + + stop_chan chan interface{} + return_chan chan *HarvesterFinish + stream core.Stream + fileinfo os.FileInfo + path string + config *core.Config + stream_config *core.StreamConfig + offset int64 + output chan<- *core.EventDescriptor + codec core.Codec + file *os.File + split bool + + line_speed float64 + byte_speed float64 + line_count uint64 + byte_count uint64 + last_eof_off *int64 + last_eof *time.Time } func NewHarvester(stream core.Stream, config *core.Config, stream_config *core.StreamConfig, offset int64) *Harvester { - var fileinfo os.FileInfo - var path string - - if stream != nil { - // Grab now so we can safely use them even if prospector changes them - path, fileinfo = stream.Info() - } else { - // This is stdin - path, fileinfo = "-", nil - } - - ret := &Harvester{ - stop_chan: make(chan 
interface{}), - return_chan: make(chan *HarvesterFinish, 1), - stream: stream, - fileinfo: fileinfo, - path: path, - config: config, - stream_config: stream_config, - offset: offset, - last_eof: nil, - } - - ret.codec = stream_config.CodecFactory.NewCodec(ret.eventCallback, ret.offset) - - return ret + var fileinfo os.FileInfo + var path string + + if stream != nil { + // Grab now so we can safely use them even if prospector changes them + path, fileinfo = stream.Info() + } else { + // This is stdin + path, fileinfo = "-", nil + } + + ret := &Harvester{ + stop_chan: make(chan interface{}), + return_chan: make(chan *HarvesterFinish, 1), + stream: stream, + fileinfo: fileinfo, + path: path, + config: config, + stream_config: stream_config, + offset: offset, + last_eof: nil, + } + + ret.codec = stream_config.CodecFactory.NewCodec(ret.eventCallback, ret.offset) + + return ret } func (h *Harvester) Start(output chan<- *core.EventDescriptor) { - go func() { - status := &HarvesterFinish{} - status.Last_Offset, status.Error = h.harvest(output) - h.return_chan <- status - }() + go func() { + status := &HarvesterFinish{} + status.Last_Offset, status.Error = h.harvest(output) + h.return_chan <- status + }() } func (h *Harvester) Stop() { - close(h.stop_chan) + close(h.stop_chan) } func (h *Harvester) OnFinish() <-chan *HarvesterFinish { - return h.return_chan + return h.return_chan } func (h *Harvester) harvest(output chan<- *core.EventDescriptor) (int64, error) { - if err := h.prepareHarvester(); err != nil { - return h.offset, err - } - - defer h.file.Close() - - h.output = output - - if h.path == "-" { - log.Info("Started stdin harvester") - h.offset = 0 - } else { - // Get current offset in file - offset, err := h.file.Seek(0, os.SEEK_CUR) - if err != nil { - log.Warning("Failed to determine start offset for %s: %s", h.path, err) - return h.offset, err - } - - if h.offset != offset { - log.Warning("Started harvester at position %d (requested %d): %s", offset, h.offset, 
h.path) - } else { - log.Info("Started harvester at position %d (requested %d): %s", offset, h.offset, h.path) - } - - h.offset = offset - } - - // The buffer size limits the maximum line length we can read, including terminator - reader := NewLineReader(h.file, int(h.config.General.LineBufferBytes), int(h.config.General.MaxLineBytes)) - - // TODO: Make configurable? - read_timeout := 10 * time.Second - - last_read_time := time.Now() - last_line_count := uint64(0) - last_byte_count := uint64(0) - last_measurement := last_read_time - seconds_without_events := 0 + if err := h.prepareHarvester(); err != nil { + return h.offset, err + } + + defer h.file.Close() + + h.output = output + + if h.path == "-" { + log.Info("Started stdin harvester") + h.offset = 0 + } else { + // Get current offset in file + offset, err := h.file.Seek(0, os.SEEK_CUR) + if err != nil { + log.Warning("Failed to determine start offset for %s: %s", h.path, err) + return h.offset, err + } + + if h.offset != offset { + log.Warning("Started harvester at position %d (requested %d): %s", offset, h.offset, h.path) + } else { + log.Info("Started harvester at position %d (requested %d): %s", offset, h.offset, h.path) + } + + h.offset = offset + } + + // The buffer size limits the maximum line length we can read, including terminator + reader := NewLineReader(h.file, int(h.config.General.LineBufferBytes), int(h.config.General.MaxLineBytes)) + + // TODO: Make configurable? 
+ read_timeout := 10 * time.Second + + last_read_time := time.Now() + last_line_count := uint64(0) + last_byte_count := uint64(0) + last_measurement := last_read_time + seconds_without_events := 0 ReadLoop: - for { - text, bytesread, err := h.readline(reader) - - if duration := time.Since(last_measurement); duration >= time.Second { - h.Lock() - - h.line_speed = core.CalculateSpeed(duration, h.line_speed, float64(h.line_count - last_line_count), &seconds_without_events) - h.byte_speed = core.CalculateSpeed(duration, h.byte_speed, float64(h.byte_count - last_byte_count), &seconds_without_events) - - last_byte_count = h.byte_count - last_line_count = h.line_count - last_measurement = time.Now() - - h.codec.Meter() - - h.last_eof = nil - - h.Unlock() - - // Check shutdown - select { - case <-h.stop_chan: - break ReadLoop - default: - } - } - - if err == nil { - line_offset := h.offset - h.offset += int64(bytesread) - - // Codec is last - it forwards harvester state for us such as offset for resume - h.codec.Event(line_offset, h.offset, text) - - last_read_time = time.Now() - h.line_count++ - h.byte_count += uint64(bytesread) - - continue - } - - if err != io.EOF { - if h.path == "-" { - log.Error("Unexpected error reading from stdin: %s", err) - } else { - log.Error("Unexpected error reading from %s: %s", h.path, err) - } - return h.codec.Teardown(), err - } - - if h.path == "-" { - // Stdin has finished - stdin blocks permanently until the stream ends - // Once the stream ends, finish the harvester - log.Info("Stopping harvest of stdin; EOF reached") - return h.codec.Teardown(), nil - } - - // Check shutdown - select { - case <-h.stop_chan: - break ReadLoop - default: - } - - h.Lock() - if h.last_eof_off == nil { - h.last_eof_off = new(int64) - } - *h.last_eof_off = h.offset - - if h.last_eof == nil { - h.last_eof = new(time.Time) - } - *h.last_eof = last_read_time - h.Unlock() - - // Don't check for truncation until we hit the full read_timeout - if 
time.Since(last_read_time) < read_timeout { - continue - } - - info, err := h.file.Stat() - if err != nil { - log.Error("Unexpected error checking status of %s: %s", h.path, err) - return h.codec.Teardown(), err - } - - if info.Size() < h.offset { - log.Warning("Unexpected file truncation, seeking to beginning: %s", h.path) - h.file.Seek(0, os.SEEK_SET) - h.offset = 0 - // TODO: How does this impact a partial line reader buffer? - // TODO: How does this imapct multiline? - continue - } - - if age := time.Since(last_read_time); age > h.stream_config.DeadTime { - // if last_read_time was more than dead time, this file is probably dead. Stop watching it. - log.Info("Stopping harvest of %s; last change was %v ago", h.path, age-(age%time.Second)) - // TODO: We should return a Stat() from before we attempted to read - // In prospector we use that for comparison to resume - // This prevents a potential race condition if we stop just as the - // file is modified with extra lines... - return h.codec.Teardown(), nil - } - } - - log.Info("Harvester for %s exiting", h.path) - return h.codec.Teardown(), nil + for { + text, bytesread, err := h.readline(reader) + + if duration := time.Since(last_measurement); duration >= time.Second { + h.Lock() + + h.line_speed = core.CalculateSpeed(duration, h.line_speed, float64(h.line_count-last_line_count), &seconds_without_events) + h.byte_speed = core.CalculateSpeed(duration, h.byte_speed, float64(h.byte_count-last_byte_count), &seconds_without_events) + + last_byte_count = h.byte_count + last_line_count = h.line_count + last_measurement = time.Now() + + h.codec.Meter() + + h.last_eof = nil + + h.Unlock() + + // Check shutdown + select { + case <-h.stop_chan: + break ReadLoop + default: + } + } + + if err == nil { + line_offset := h.offset + h.offset += int64(bytesread) + + // Codec is last - it forwards harvester state for us such as offset for resume + h.codec.Event(line_offset, h.offset, text) + + last_read_time = time.Now() + 
h.line_count++ + h.byte_count += uint64(bytesread) + + continue + } + + if err != io.EOF { + if h.path == "-" { + log.Error("Unexpected error reading from stdin: %s", err) + } else { + log.Error("Unexpected error reading from %s: %s", h.path, err) + } + return h.codec.Teardown(), err + } + + if h.path == "-" { + // Stdin has finished - stdin blocks permanently until the stream ends + // Once the stream ends, finish the harvester + log.Info("Stopping harvest of stdin; EOF reached") + return h.codec.Teardown(), nil + } + + // Check shutdown + select { + case <-h.stop_chan: + break ReadLoop + default: + } + + h.Lock() + if h.last_eof_off == nil { + h.last_eof_off = new(int64) + } + *h.last_eof_off = h.offset + + if h.last_eof == nil { + h.last_eof = new(time.Time) + } + *h.last_eof = last_read_time + h.Unlock() + + // Don't check for truncation until we hit the full read_timeout + if time.Since(last_read_time) < read_timeout { + continue + } + + info, err := h.file.Stat() + if err != nil { + log.Error("Unexpected error checking status of %s: %s", h.path, err) + return h.codec.Teardown(), err + } + + if info.Size() < h.offset { + log.Warning("Unexpected file truncation, seeking to beginning: %s", h.path) + h.file.Seek(0, os.SEEK_SET) + h.offset = 0 + // TODO: How does this impact a partial line reader buffer? + // TODO: How does this impact multiline? + continue + } + + if age := time.Since(last_read_time); age > h.stream_config.DeadTime { + // if last_read_time was more than dead time, this file is probably dead. Stop watching it. + log.Info("Stopping harvest of %s; last change was %v ago", h.path, age-(age%time.Second)) + // TODO: We should return a Stat() from before we attempted to read + // In prospector we use that for comparison to resume + // This prevents a potential race condition if we stop just as the + // file is modified with extra lines... 
+ return h.codec.Teardown(), nil + } + } + + log.Info("Harvester for %s exiting", h.path) + return h.codec.Teardown(), nil } func (h *Harvester) eventCallback(start_offset int64, end_offset int64, text string) { - event := core.Event{ - "host": h.config.General.Host, - "path": h.path, - "offset": start_offset, - "message": text, - } - for k := range h.stream_config.Fields { - event[k] = h.stream_config.Fields[k] - } - - // If we split any of the line data, tag it - if h.split { - if v, ok := event["tags"]; ok { - if v, ok := v.([]string); ok { - v = append(v, "splitline") - } - } else { - event["tags"] = []string{"splitline"} - } - h.split = false - } - - encoded, err := event.Encode() - if err != nil { - // This should never happen - log and skip if it does - log.Warning("Skipping line in %s at offset %d due to encoding failure: %s", h.path, start_offset, err) - return - } - - desc := &core.EventDescriptor{ - Stream: h.stream, - Offset: end_offset, - Event: encoded, - } + event := core.Event{ + "host": h.config.General.Host, + "path": h.path, + "offset": start_offset, + "message": text, + } + for k := range h.stream_config.Fields { + event[k] = h.stream_config.Fields[k] + } + + // If we split any of the line data, tag it + if h.split { + if v, ok := event["tags"]; ok { + if v, ok := v.([]string); ok { + v = append(v, "splitline") + } + } else { + event["tags"] = []string{"splitline"} + } + h.split = false + } + + encoded, err := event.Encode() + if err != nil { + // This should never happen - log and skip if it does + log.Warning("Skipping line in %s at offset %d due to encoding failure: %s", h.path, start_offset, err) + return + } + + desc := &core.EventDescriptor{ + Stream: h.stream, + Offset: end_offset, + Event: encoded, + } EventLoop: - for { - select { - case <-h.stop_chan: - break EventLoop - case h.output <- desc: - break EventLoop - } - } + for { + select { + case <-h.stop_chan: + break EventLoop + case h.output <- desc: + break EventLoop + } + } } func 
(h *Harvester) prepareHarvester() error { - // Special handling that "-" means to read from standard input - if h.path == "-" { - h.file = os.Stdin - return nil - } - - var err error - h.file, err = h.openFile(h.path) - if err != nil { - log.Error("Failed opening %s: %s", h.path, err) - return err - } - - // Check we opened the right file - info, err := h.file.Stat() - if err != nil { - h.file.Close() - return err - } - - if !os.SameFile(info, h.fileinfo) { - h.file.Close() - return fmt.Errorf("Not the same file") - } - - // TODO: Check error? - h.file.Seek(h.offset, os.SEEK_SET) - - return nil + // Special handling that "-" means to read from standard input + if h.path == "-" { + h.file = os.Stdin + return nil + } + + var err error + h.file, err = h.openFile(h.path) + if err != nil { + log.Error("Failed opening %s: %s", h.path, err) + return err + } + + // Check we opened the right file + info, err := h.file.Stat() + if err != nil { + h.file.Close() + return err + } + + if !os.SameFile(info, h.fileinfo) { + h.file.Close() + return fmt.Errorf("Not the same file") + } + + // TODO: Check error? 
+ h.file.Seek(h.offset, os.SEEK_SET) + + return nil } func (h *Harvester) readline(reader *LineReader) (string, int, error) { - var newline int - - line, err := reader.ReadSlice() - - if line != nil { - if err == nil { - // Line will always end in '\n' if no error, but check also for CR - if len(line) > 1 && line[len(line)-2] == '\r' { - newline = 2 - } else { - newline = 1 - } - } else if err == ErrLineTooLong { - h.split = true - err = nil - } - - // Return the line along with the length including line ending - length := len(line) - // We use string() to copy the memory, which is a slice of the line buffer we need to re-use - return string(line[:length-newline]), length, err - } - - if err != nil { - if err != io.EOF { - // Pass back error to tear down harvester - return "", 0, err - } - - // Backoff - select { - case <-h.stop_chan: - case <-time.After(1 * time.Second): - } - } - - return "", 0, io.EOF + var newline int + + line, err := reader.ReadSlice() + + if line != nil { + if err == nil { + // Line will always end in '\n' if no error, but check also for CR + if len(line) > 1 && line[len(line)-2] == '\r' { + newline = 2 + } else { + newline = 1 + } + } else if err == ErrLineTooLong { + h.split = true + err = nil + } + + // Return the line along with the length including line ending + length := len(line) + // We use string() to copy the memory, which is a slice of the line buffer we need to re-use + return string(line[:length-newline]), length, err + } + + if err != nil { + if err != io.EOF { + // Pass back error to tear down harvester + return "", 0, err + } + + // Backoff + select { + case <-h.stop_chan: + case <-time.After(1 * time.Second): + } + } + + return "", 0, io.EOF } func (h *Harvester) Snapshot() *core.Snapshot { - h.RLock() - - ret := core.NewSnapshot("Harvester") - ret.AddEntry("Speed (Lps)", h.line_speed) - ret.AddEntry("Speed (Bps)", h.byte_speed) - ret.AddEntry("Processed lines", h.line_count) - ret.AddEntry("Current offset", h.offset) - if 
h.last_eof_off == nil { - ret.AddEntry("Last EOF Offset", "Never") - } else { - ret.AddEntry("Last EOF Offset", h.last_eof_off) - } - if h.last_eof == nil { - ret.AddEntry("Status", "Alive") - } else { - ret.AddEntry("Status", "Idle") - if age := time.Since(*h.last_eof); age < h.stream_config.DeadTime { - ret.AddEntry("Dead timer", h.stream_config.DeadTime-age) - } else { - ret.AddEntry("Dead timer", "0s") - } - } - - if sub_snap := h.codec.Snapshot(); sub_snap != nil { - ret.AddSub(sub_snap) - } - - h.RUnlock() - - return ret + h.RLock() + + ret := core.NewSnapshot("Harvester") + ret.AddEntry("Speed (Lps)", h.line_speed) + ret.AddEntry("Speed (Bps)", h.byte_speed) + ret.AddEntry("Processed lines", h.line_count) + ret.AddEntry("Current offset", h.offset) + if h.last_eof_off == nil { + ret.AddEntry("Last EOF Offset", "Never") + } else { + ret.AddEntry("Last EOF Offset", h.last_eof_off) + } + if h.last_eof == nil { + ret.AddEntry("Status", "Alive") + } else { + ret.AddEntry("Status", "Idle") + if age := time.Since(*h.last_eof); age < h.stream_config.DeadTime { + ret.AddEntry("Dead timer", h.stream_config.DeadTime-age) + } else { + ret.AddEntry("Dead timer", "0s") + } + } + + if sub_snap := h.codec.Snapshot(); sub_snap != nil { + ret.AddSub(sub_snap) + } + + h.RUnlock() + + return ret } diff --git a/src/lc-lib/harvester/harvester_other.go b/src/lc-lib/harvester/harvester_other.go index c8f95d85..7a7468dc 100644 --- a/src/lc-lib/harvester/harvester_other.go +++ b/src/lc-lib/harvester/harvester_other.go @@ -19,9 +19,9 @@ package harvester import ( - "os" + "os" ) func (h *Harvester) openFile(path string) (*os.File, error) { - return os.Open(path) + return os.Open(path) } diff --git a/src/lc-lib/harvester/harvester_windows.go b/src/lc-lib/harvester/harvester_windows.go index 9f224da0..da44af27 100644 --- a/src/lc-lib/harvester/harvester_windows.go +++ b/src/lc-lib/harvester/harvester_windows.go @@ -17,26 +17,26 @@ package harvester import ( - "os" - "syscall" + "os" + 
"syscall" ) func (h *Harvester) openFile(path string) (*os.File, error) { - // We will call CreateFile directly so we can pass in FILE_SHARE_DELETE - // This ensures that a program can still rotate the file even though we have it open - pathp, err := syscall.UTF16PtrFromString(path) - if err != nil { - return nil, err - } + // We will call CreateFile directly so we can pass in FILE_SHARE_DELETE + // This ensures that a program can still rotate the file even though we have it open + pathp, err := syscall.UTF16PtrFromString(path) + if err != nil { + return nil, err + } - var sa *syscall.SecurityAttributes + var sa *syscall.SecurityAttributes - handle, err := syscall.CreateFile( - pathp, syscall.GENERIC_READ, syscall.FILE_SHARE_READ|syscall.FILE_SHARE_WRITE|syscall.FILE_SHARE_DELETE, - sa, syscall.OPEN_EXISTING, syscall.FILE_ATTRIBUTE_NORMAL, 0) - if err != nil { - return nil, err - } + handle, err := syscall.CreateFile( + pathp, syscall.GENERIC_READ, syscall.FILE_SHARE_READ|syscall.FILE_SHARE_WRITE|syscall.FILE_SHARE_DELETE, + sa, syscall.OPEN_EXISTING, syscall.FILE_ATTRIBUTE_NORMAL, 0) + if err != nil { + return nil, err + } - return os.NewFile(uintptr(handle), path), nil + return os.NewFile(uintptr(handle), path), nil } diff --git a/src/lc-lib/harvester/linereader.go b/src/lc-lib/harvester/linereader.go index d068ede9..d729a05d 100644 --- a/src/lc-lib/harvester/linereader.go +++ b/src/lc-lib/harvester/linereader.go @@ -12,107 +12,107 @@ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
-*/ + */ package harvester import ( - "bytes" - "errors" - "io" + "bytes" + "errors" + "io" ) var ( - ErrLineTooLong = errors.New("LineReader: line too long") + ErrLineTooLong = errors.New("LineReader: line too long") ) // A read interface to tail type LineReader struct { - rd io.Reader - buf []byte - overflow [][]byte - size int - max_line int - cur_max int - start int - end int - err error + rd io.Reader + buf []byte + overflow [][]byte + size int + max_line int + cur_max int + start int + end int + err error } func NewLineReader(rd io.Reader, size int, max_line int) *LineReader { - lr := &LineReader{ - rd: rd, - buf: make([]byte, size), - size: size, - max_line: max_line, - cur_max: max_line, - } - - return lr + lr := &LineReader{ + rd: rd, + buf: make([]byte, size), + size: size, + max_line: max_line, + cur_max: max_line, + } + + return lr } func (lr *LineReader) Reset() { - lr.start = 0 + lr.start = 0 } func (lr *LineReader) ReadSlice() ([]byte, error) { - var err error - var line []byte - - if lr.end == 0 { - err = lr.fill() - } - - for { - if n := bytes.IndexByte(lr.buf[lr.start:lr.end], '\n'); n >= 0 && n < lr.cur_max { - line = lr.buf[lr.start:lr.start+n+1] - lr.start += n + 1 - err = nil - break - } - - if err != nil { - return nil, err - } - - if lr.end - lr.start >= lr.cur_max { - line = lr.buf[lr.start:lr.start+lr.cur_max] - lr.start += lr.cur_max - err = ErrLineTooLong - break - } - - if lr.end - lr.start >= len(lr.buf) { - lr.start, lr.end = 0, 0 - if lr.overflow == nil { - lr.overflow = make([][]byte, 0, 1) - } - lr.overflow = append(lr.overflow, lr.buf) - lr.cur_max -= len(lr.buf) - lr.buf = make([]byte, lr.size) - } - - err = lr.fill() - } - - if lr.overflow != nil { - lr.overflow = append(lr.overflow, line) - line = bytes.Join(lr.overflow, []byte{}) - lr.overflow = nil - lr.cur_max = lr.max_line - } - return line, err + var err error + var line []byte + + if lr.end == 0 { + err = lr.fill() + } + + for { + if n := 
bytes.IndexByte(lr.buf[lr.start:lr.end], '\n'); n >= 0 && n < lr.cur_max { + line = lr.buf[lr.start : lr.start+n+1] + lr.start += n + 1 + err = nil + break + } + + if err != nil { + return nil, err + } + + if lr.end-lr.start >= lr.cur_max { + line = lr.buf[lr.start : lr.start+lr.cur_max] + lr.start += lr.cur_max + err = ErrLineTooLong + break + } + + if lr.end-lr.start >= len(lr.buf) { + lr.start, lr.end = 0, 0 + if lr.overflow == nil { + lr.overflow = make([][]byte, 0, 1) + } + lr.overflow = append(lr.overflow, lr.buf) + lr.cur_max -= len(lr.buf) + lr.buf = make([]byte, lr.size) + } + + err = lr.fill() + } + + if lr.overflow != nil { + lr.overflow = append(lr.overflow, line) + line = bytes.Join(lr.overflow, []byte{}) + lr.overflow = nil + lr.cur_max = lr.max_line + } + return line, err } func (lr *LineReader) fill() error { - if lr.start != 0 { - copy(lr.buf, lr.buf[lr.start:lr.end]) - lr.end -= lr.start - lr.start = 0 - } + if lr.start != 0 { + copy(lr.buf, lr.buf[lr.start:lr.end]) + lr.end -= lr.start + lr.start = 0 + } - n, err := lr.rd.Read(lr.buf[lr.end:]) - lr.end += n + n, err := lr.rd.Read(lr.buf[lr.end:]) + lr.end += n - return err + return err } diff --git a/src/lc-lib/harvester/linereader_test.go b/src/lc-lib/harvester/linereader_test.go index 59903c9c..ed899072 100644 --- a/src/lc-lib/harvester/linereader_test.go +++ b/src/lc-lib/harvester/linereader_test.go @@ -12,127 +12,126 @@ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
-*/ + */ package harvester import ( - "bytes" - "testing" + "bytes" + "testing" ) func checkLine(t *testing.T, reader *LineReader, expected []byte) { - line, err := reader.ReadSlice() - if line == nil { - t.Log("No line returned") - t.FailNow() - } - if !bytes.Equal(line, expected) { - t.Logf("Line data incorrect: [% X]", line) - t.FailNow() - } - if err != nil { - t.Logf("Unexpected error: %s", err) - t.FailNow() - } + line, err := reader.ReadSlice() + if line == nil { + t.Log("No line returned") + t.FailNow() + } + if !bytes.Equal(line, expected) { + t.Logf("Line data incorrect: [% X]", line) + t.FailNow() + } + if err != nil { + t.Logf("Unexpected error: %s", err) + t.FailNow() + } } func checkLineTooLong(t *testing.T, reader *LineReader, expected []byte) { - line, err := reader.ReadSlice() - if line == nil { - t.Log("No line returned") - t.FailNow() - } - if !bytes.Equal(line, expected) { - t.Logf("Line data incorrect: [% X]", line) - t.FailNow() - } - if err != ErrLineTooLong { - t.Logf("Unexpected error: %s", err) - t.FailNow() - } + line, err := reader.ReadSlice() + if line == nil { + t.Log("No line returned") + t.FailNow() + } + if !bytes.Equal(line, expected) { + t.Logf("Line data incorrect: [% X]", line) + t.FailNow() + } + if err != ErrLineTooLong { + t.Logf("Unexpected error: %s", err) + t.FailNow() + } } func checkEnd(t *testing.T, reader *LineReader) { - line, err := reader.ReadSlice() - if line != nil { - t.Log("Unexpected line returned") - t.FailNow() - } - if err == nil { - t.Log("Expected error") - t.FailNow() - } + line, err := reader.ReadSlice() + if line != nil { + t.Log("Unexpected line returned") + t.FailNow() + } + if err == nil { + t.Log("Expected error") + t.FailNow() + } } func TestLineRead(t *testing.T) { - data := bytes.NewBufferString("12345678901234567890\n12345678901234567890\n") + data := bytes.NewBufferString("12345678901234567890\n12345678901234567890\n") - // New line read with 100 bytes, enough for the above - reader := 
NewLineReader(data, 100, 100) + // New line read with 100 bytes, enough for the above + reader := NewLineReader(data, 100, 100) - checkLine(t, reader, []byte("12345678901234567890\n")) - checkLine(t, reader, []byte("12345678901234567890\n")) - checkEnd(t, reader) + checkLine(t, reader, []byte("12345678901234567890\n")) + checkLine(t, reader, []byte("12345678901234567890\n")) + checkEnd(t, reader) } func TestLineReadEmpty(t *testing.T) { - data := bytes.NewBufferString("\n12345678901234567890\n") + data := bytes.NewBufferString("\n12345678901234567890\n") - // New line read with 100 bytes, enough for the above - reader := NewLineReader(data, 100, 100) + // New line read with 100 bytes, enough for the above + reader := NewLineReader(data, 100, 100) - checkLine(t, reader, []byte("\n")) - checkLine(t, reader, []byte("12345678901234567890\n")) - checkEnd(t, reader) + checkLine(t, reader, []byte("\n")) + checkLine(t, reader, []byte("12345678901234567890\n")) + checkEnd(t, reader) } func TestLineReadIncomplete(t *testing.T) { - data := bytes.NewBufferString("\n12345678901234567890\n123456") + data := bytes.NewBufferString("\n12345678901234567890\n123456") - // New line read with 100 bytes, enough for the above - reader := NewLineReader(data, 100, 100) + // New line read with 100 bytes, enough for the above + reader := NewLineReader(data, 100, 100) - checkLine(t, reader, []byte("\n")) - checkLine(t, reader, []byte("12345678901234567890\n")) - checkEnd(t, reader) + checkLine(t, reader, []byte("\n")) + checkLine(t, reader, []byte("12345678901234567890\n")) + checkEnd(t, reader) } func TestLineReadOverflow(t *testing.T) { - data := bytes.NewBufferString("12345678901234567890\n123456789012345678901234567890\n12345678901234567890\n") + data := bytes.NewBufferString("12345678901234567890\n123456789012345678901234567890\n12345678901234567890\n") - // New line read with 21 bytes buffer but 100 max line to trigger overflow - reader := NewLineReader(data, 21, 100) + // New line read 
with 21 bytes buffer but 100 max line to trigger overflow + reader := NewLineReader(data, 21, 100) - checkLine(t, reader, []byte("12345678901234567890\n")) - checkLine(t, reader, []byte("123456789012345678901234567890\n")) - checkLine(t, reader, []byte("12345678901234567890\n")) - checkEnd(t, reader) + checkLine(t, reader, []byte("12345678901234567890\n")) + checkLine(t, reader, []byte("123456789012345678901234567890\n")) + checkLine(t, reader, []byte("12345678901234567890\n")) + checkEnd(t, reader) } - func TestLineReadOverflowTooLong(t *testing.T) { - data := bytes.NewBufferString("12345678901234567890\n123456789012345678901234567890\n12345678901234567890\n") + data := bytes.NewBufferString("12345678901234567890\n123456789012345678901234567890\n12345678901234567890\n") - // New line read with 10 bytes buffer and 21 max line length - reader := NewLineReader(data, 10, 21) + // New line read with 10 bytes buffer and 21 max line length + reader := NewLineReader(data, 10, 21) - checkLine(t, reader, []byte("12345678901234567890\n")) - checkLineTooLong(t, reader, []byte("123456789012345678901")) - checkLine(t, reader, []byte("234567890\n")) - checkLine(t, reader, []byte("12345678901234567890\n")) - checkEnd(t, reader) + checkLine(t, reader, []byte("12345678901234567890\n")) + checkLineTooLong(t, reader, []byte("123456789012345678901")) + checkLine(t, reader, []byte("234567890\n")) + checkLine(t, reader, []byte("12345678901234567890\n")) + checkEnd(t, reader) } func TestLineReadTooLong(t *testing.T) { - data := bytes.NewBufferString("12345678901234567890\n123456789012345678901234567890\n12345678901234567890\n") + data := bytes.NewBufferString("12345678901234567890\n123456789012345678901234567890\n12345678901234567890\n") - // New line read with ample buffer and 21 max line length - reader := NewLineReader(data, 100, 21) + // New line read with ample buffer and 21 max line length + reader := NewLineReader(data, 100, 21) - checkLine(t, reader, 
[]byte("12345678901234567890\n")) - checkLineTooLong(t, reader, []byte("123456789012345678901")) - checkLine(t, reader, []byte("234567890\n")) - checkLine(t, reader, []byte("12345678901234567890\n")) - checkEnd(t, reader) + checkLine(t, reader, []byte("12345678901234567890\n")) + checkLineTooLong(t, reader, []byte("123456789012345678901")) + checkLine(t, reader, []byte("234567890\n")) + checkLine(t, reader, []byte("12345678901234567890\n")) + checkEnd(t, reader) } diff --git a/src/lc-lib/harvester/logging.go b/src/lc-lib/harvester/logging.go index 3a363141..df6f8a86 100644 --- a/src/lc-lib/harvester/logging.go +++ b/src/lc-lib/harvester/logging.go @@ -21,5 +21,5 @@ import "github.com/op/go-logging" var log *logging.Logger func init() { - log = logging.MustGetLogger("harvester") + log = logging.MustGetLogger("harvester") } diff --git a/src/lc-lib/prospector/errors.go b/src/lc-lib/prospector/errors.go index a2c01a5e..495f80d2 100644 --- a/src/lc-lib/prospector/errors.go +++ b/src/lc-lib/prospector/errors.go @@ -12,18 +12,18 @@ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
-*/ + */ package prospector type ProspectorSkipError struct { - message string + message string } func newProspectorSkipError(message string) *ProspectorSkipError { - return &ProspectorSkipError{message: message} + return &ProspectorSkipError{message: message} } func (e *ProspectorSkipError) Error() string { - return e.message + return e.message } diff --git a/src/lc-lib/prospector/info.go b/src/lc-lib/prospector/info.go index bbcec315..6bcdbaab 100644 --- a/src/lc-lib/prospector/info.go +++ b/src/lc-lib/prospector/info.go @@ -17,77 +17,77 @@ package prospector import ( - "github.com/driskell/log-courier/src/lc-lib/core" - "github.com/driskell/log-courier/src/lc-lib/harvester" - "github.com/driskell/log-courier/src/lc-lib/registrar" - "os" + "github.com/driskell/log-courier/src/lc-lib/core" + "github.com/driskell/log-courier/src/lc-lib/harvester" + "github.com/driskell/log-courier/src/lc-lib/registrar" + "os" ) const ( - Status_Ok = iota - Status_Resume - Status_Failed - Status_Invalid + Status_Ok = iota + Status_Resume + Status_Failed + Status_Invalid ) const ( - Orphaned_No = iota - Orphaned_Maybe - Orphaned_Yes + Orphaned_No = iota + Orphaned_Maybe + Orphaned_Yes ) type prospectorInfo struct { - file string - identity registrar.FileIdentity - last_seen uint32 - status int - running bool - orphaned int - finish_offset int64 - harvester *harvester.Harvester - err error + file string + identity registrar.FileIdentity + last_seen uint32 + status int + running bool + orphaned int + finish_offset int64 + harvester *harvester.Harvester + err error } func newProspectorInfoFromFileState(file string, filestate *registrar.FileState) *prospectorInfo { - return &prospectorInfo{ - file: file, - identity: filestate, - status: Status_Resume, - finish_offset: filestate.Offset, - } + return &prospectorInfo{ + file: file, + identity: filestate, + status: Status_Resume, + finish_offset: filestate.Offset, + } } func newProspectorInfoFromFileInfo(file string, fileinfo os.FileInfo) 
*prospectorInfo { - return &prospectorInfo{ - file: file, - identity: registrar.NewFileInfo(fileinfo), // fileinfo is nil for stdin - } + return &prospectorInfo{ + file: file, + identity: registrar.NewFileInfo(fileinfo), // fileinfo is nil for stdin + } } func newProspectorInfoInvalid(file string, err error) *prospectorInfo { - return &prospectorInfo{ - file: file, - err: err, - status: Status_Invalid, - } + return &prospectorInfo{ + file: file, + err: err, + status: Status_Invalid, + } } func (pi *prospectorInfo) Info() (string, os.FileInfo) { - return pi.file, pi.identity.Stat() + return pi.file, pi.identity.Stat() } func (pi *prospectorInfo) isRunning() bool { - if !pi.running { - return false - } + if !pi.running { + return false + } - select { - case status := <-pi.harvester.OnFinish(): - pi.setHarvesterStopped(status) - default: - } + select { + case status := <-pi.harvester.OnFinish(): + pi.setHarvesterStopped(status) + default: + } - return pi.running + return pi.running } /*func (pi *prospectorInfo) ShutdownSignal() <-chan interface{} { @@ -95,39 +95,39 @@ func (pi *prospectorInfo) isRunning() bool { }*/ func (pi *prospectorInfo) stop() { - if !pi.running { - return - } - pi.harvester.Stop() + if !pi.running { + return + } + pi.harvester.Stop() } func (pi *prospectorInfo) wait() { - if !pi.running { - return - } - status := <-pi.harvester.OnFinish() - pi.setHarvesterStopped(status) + if !pi.running { + return + } + status := <-pi.harvester.OnFinish() + pi.setHarvesterStopped(status) } func (pi *prospectorInfo) getSnapshot() *core.Snapshot { - return pi.harvester.Snapshot() + return pi.harvester.Snapshot() } func (pi *prospectorInfo) setHarvesterStopped(status *harvester.HarvesterFinish) { - pi.running = false - pi.finish_offset = status.Last_Offset - if status.Error != nil { - pi.status = Status_Failed - pi.err = status.Error - } - pi.harvester = nil + pi.running = false + pi.finish_offset = status.Last_Offset + if status.Error != nil { + pi.status = 
Status_Failed + pi.err = status.Error + } + pi.harvester = nil } func (pi *prospectorInfo) update(fileinfo os.FileInfo, iteration uint32) { - if fileinfo != nil { - // Allow identity to replace itself with a new identity (this allows a FileState to promote itself to a FileInfo) - pi.identity.Update(fileinfo, &pi.identity) - } + if fileinfo != nil { + // Allow identity to replace itself with a new identity (this allows a FileState to promote itself to a FileInfo) + pi.identity.Update(fileinfo, &pi.identity) + } - pi.last_seen = iteration + pi.last_seen = iteration } diff --git a/src/lc-lib/prospector/logging.go b/src/lc-lib/prospector/logging.go index 8688c0e8..29213c62 100644 --- a/src/lc-lib/prospector/logging.go +++ b/src/lc-lib/prospector/logging.go @@ -12,7 +12,7 @@ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. -*/ + */ package prospector @@ -21,5 +21,5 @@ import "github.com/op/go-logging" var log *logging.Logger func init() { - log = logging.MustGetLogger("prospector") + log = logging.MustGetLogger("prospector") } diff --git a/src/lc-lib/prospector/prospector.go b/src/lc-lib/prospector/prospector.go index ad7f7339..dd0d4e51 100644 --- a/src/lc-lib/prospector/prospector.go +++ b/src/lc-lib/prospector/prospector.go @@ -20,472 +20,472 @@ package prospector import ( - "fmt" - "github.com/driskell/log-courier/src/lc-lib/core" - "github.com/driskell/log-courier/src/lc-lib/harvester" - "github.com/driskell/log-courier/src/lc-lib/registrar" - "github.com/driskell/log-courier/src/lc-lib/spooler" - "os" - "path/filepath" - "time" + "fmt" + "github.com/driskell/log-courier/src/lc-lib/core" + "github.com/driskell/log-courier/src/lc-lib/harvester" + "github.com/driskell/log-courier/src/lc-lib/registrar" + "github.com/driskell/log-courier/src/lc-lib/spooler" + "os" + "path/filepath" + "time" ) type Prospector struct { - core.PipelineSegment - 
core.PipelineConfigReceiver - core.PipelineSnapshotProvider - - config *core.Config - prospectorindex map[string]*prospectorInfo - prospectors map[*prospectorInfo]*prospectorInfo - from_beginning bool - iteration uint32 - lastscan time.Time - registrar *registrar.Registrar - registrar_spool *registrar.RegistrarEventSpool - snapshot_chan chan interface{} - snapshot_sink chan []*core.Snapshot - - output chan<- *core.EventDescriptor + core.PipelineSegment + core.PipelineConfigReceiver + core.PipelineSnapshotProvider + + config *core.Config + prospectorindex map[string]*prospectorInfo + prospectors map[*prospectorInfo]*prospectorInfo + from_beginning bool + iteration uint32 + lastscan time.Time + registrar *registrar.Registrar + registrar_spool *registrar.RegistrarEventSpool + snapshot_chan chan interface{} + snapshot_sink chan []*core.Snapshot + + output chan<- *core.EventDescriptor } func NewProspector(pipeline *core.Pipeline, config *core.Config, from_beginning bool, registrar_imp *registrar.Registrar, spooler_imp *spooler.Spooler) (*Prospector, error) { - ret := &Prospector{ - config: config, - prospectorindex: make(map[string]*prospectorInfo), - prospectors: make(map[*prospectorInfo]*prospectorInfo), - from_beginning: from_beginning, - registrar: registrar_imp, - registrar_spool: registrar_imp.Connect(), - snapshot_chan: make(chan interface{}), - snapshot_sink: make(chan []*core.Snapshot), - output: spooler_imp.Connect(), - } - - if err := ret.init(); err != nil { - return nil, err - } - - pipeline.Register(ret) - - return ret, nil + ret := &Prospector{ + config: config, + prospectorindex: make(map[string]*prospectorInfo), + prospectors: make(map[*prospectorInfo]*prospectorInfo), + from_beginning: from_beginning, + registrar: registrar_imp, + registrar_spool: registrar_imp.Connect(), + snapshot_chan: make(chan interface{}), + snapshot_sink: make(chan []*core.Snapshot), + output: spooler_imp.Connect(), + } + + if err := ret.init(); err != nil { + return nil, err + 
} + + pipeline.Register(ret) + + return ret, nil } func (p *Prospector) init() (err error) { - var have_previous bool - if have_previous, err = p.registrar.LoadPrevious(p.loadCallback); err != nil { - return - } - - if have_previous { - // -from-beginning=false flag should only affect the very first run (no previous state) - p.from_beginning = true - - // Pre-populate prospectors with what we had previously - for _, v := range p.prospectorindex { - p.prospectors[v] = v - } - } - - return + var have_previous bool + if have_previous, err = p.registrar.LoadPrevious(p.loadCallback); err != nil { + return + } + + if have_previous { + // -from-beginning=false flag should only affect the very first run (no previous state) + p.from_beginning = true + + // Pre-populate prospectors with what we had previously + for _, v := range p.prospectorindex { + p.prospectors[v] = v + } + } + + return } func (p *Prospector) loadCallback(file string, state *registrar.FileState) (core.Stream, error) { - p.prospectorindex[file] = newProspectorInfoFromFileState(file, state) - return p.prospectorindex[file], nil + p.prospectorindex[file] = newProspectorInfoFromFileState(file, state) + return p.prospectorindex[file], nil } func (p *Prospector) Run() { - defer func() { - p.Done() - }() + defer func() { + p.Done() + }() ProspectLoop: - for { - newlastscan := time.Now() - p.iteration++ // Overflow is allowed - - for config_k, config := range p.config.Files { - for _, path := range config.Paths { - // Scan - flag false so new files always start at beginning - p.scan(path, &p.config.Files[config_k]) - } - } - - // We only obey *from_beginning (which is stored in this) on startup, - // afterwards we force from beginning - p.from_beginning = true - - // Clean up the prospector collections - for _, info := range p.prospectors { - if info.orphaned >= Orphaned_Maybe { - if !info.isRunning() { - delete(p.prospectors, info) - } - } else { - if info.last_seen >= p.iteration { - continue - } - 
delete(p.prospectorindex, info.file) - info.orphaned = Orphaned_Maybe - } - if info.orphaned == Orphaned_Maybe { - info.orphaned = Orphaned_Yes - p.registrar_spool.Add(registrar.NewDeletedEvent(info)) - } - } - - // Flush the accumulated registrar events - p.registrar_spool.Send() - - p.lastscan = newlastscan - - // Defer next scan for a bit - now := time.Now() - scan_deadline := now.Add(p.config.General.ProspectInterval) - - DelayLoop: - for { - select { - case <-time.After(scan_deadline.Sub(now)): - break DelayLoop - case <-p.OnShutdown(): - break ProspectLoop - case <-p.snapshot_chan: - p.handleSnapshot() - case config := <-p.OnConfig(): - p.config = config - } - - now = time.Now() - if now.After(scan_deadline) { - break - } - } - } - - // Send stop signal to all harvesters, then wait for them, for quick shutdown - for _, info := range p.prospectors { - info.stop() - } - for _, info := range p.prospectors { - info.wait() - } - - // Disconnect from the registrar - p.registrar_spool.Close() - - log.Info("Prospector exiting") + for { + newlastscan := time.Now() + p.iteration++ // Overflow is allowed + + for config_k, config := range p.config.Files { + for _, path := range config.Paths { + // Scan - flag false so new files always start at beginning + p.scan(path, &p.config.Files[config_k]) + } + } + + // We only obey *from_beginning (which is stored in this) on startup, + // afterwards we force from beginning + p.from_beginning = true + + // Clean up the prospector collections + for _, info := range p.prospectors { + if info.orphaned >= Orphaned_Maybe { + if !info.isRunning() { + delete(p.prospectors, info) + } + } else { + if info.last_seen >= p.iteration { + continue + } + delete(p.prospectorindex, info.file) + info.orphaned = Orphaned_Maybe + } + if info.orphaned == Orphaned_Maybe { + info.orphaned = Orphaned_Yes + p.registrar_spool.Add(registrar.NewDeletedEvent(info)) + } + } + + // Flush the accumulated registrar events + p.registrar_spool.Send() + + p.lastscan 
= newlastscan + + // Defer next scan for a bit + now := time.Now() + scan_deadline := now.Add(p.config.General.ProspectInterval) + + DelayLoop: + for { + select { + case <-time.After(scan_deadline.Sub(now)): + break DelayLoop + case <-p.OnShutdown(): + break ProspectLoop + case <-p.snapshot_chan: + p.handleSnapshot() + case config := <-p.OnConfig(): + p.config = config + } + + now = time.Now() + if now.After(scan_deadline) { + break + } + } + } + + // Send stop signal to all harvesters, then wait for them, for quick shutdown + for _, info := range p.prospectors { + info.stop() + } + for _, info := range p.prospectors { + info.wait() + } + + // Disconnect from the registrar + p.registrar_spool.Close() + + log.Info("Prospector exiting") } func (p *Prospector) scan(path string, config *core.FileConfig) { - // Evaluate the path as a wildcards/shell glob - matches, err := filepath.Glob(path) - if err != nil { - log.Error("glob(%s) failed: %v", path, err) - return - } - - // Check any matched files to see if we need to start a harvester - for _, file := range matches { - // Check the current info against our index - info, is_known := p.prospectorindex[file] - - // Stat the file, following any symlinks - // TODO: Low priority. Trigger loadFileId here for Windows instead of - // waiting for Harvester or Registrar to do it - fileinfo, err := os.Stat(file) - if err == nil { - if fileinfo.IsDir() { - err = newProspectorSkipError("Directory") - } - } - - if err != nil { - // Do we know this entry? 
- if is_known { - if info.status != Status_Invalid { - // The current entry is not an error, orphan it so we can log one - info.orphaned = Orphaned_Yes - } else if info.err != err { - // The error is different, remove this entry we'll log a new one - delete(p.prospectors, info) - } else { - // The same error occurred - don't log it again - info.update(nil, p.iteration) - continue - } - } - - // This is a new error - info = newProspectorInfoInvalid(file, err) - info.update(nil, p.iteration) - - // Print a friendly log message - if _, ok := err.(*ProspectorSkipError); ok { - log.Info("Skipping %s: %s", file, err) - } else { - log.Error("Error prospecting %s: %s", file, err) - } - - p.prospectors[info] = info - p.prospectorindex[file] = info - continue - } else if is_known && info.status == Status_Invalid { - // We have an error stub and we've just successfully got fileinfo - // Mark is_known so we treat as a new file still - is_known = false - } - - // Conditions for starting a new harvester: - // - file path hasn't been seen before - // - the file's inode or device changed - if !is_known { - // Check for dead time, but only if the file modification time is before the last scan started - // This ensures we don't skip genuine creations with dead times less than 10s - if previous, previousinfo := p.lookupFileIds(file, fileinfo); previous != "" { - // Symlinks could mean we see the same file twice - skip if we have - if previousinfo == nil { - p.flagDuplicateError(file, info) - continue - } - - // This file was simply renamed (known inode+dev) - link the same harvester channel as the old file - log.Info("File rename was detected: %s -> %s", previous, file) - info = previousinfo - info.file = file - - p.registrar_spool.Add(registrar.NewRenamedEvent(info, file)) - } else { - // This is a new entry - info = newProspectorInfoFromFileInfo(file, fileinfo) - - if fileinfo.ModTime().Before(p.lastscan) && time.Since(fileinfo.ModTime()) > config.DeadTime { - // Old file, skip it, 
but push offset of file size so we start from the end if this file changes and needs picking up - log.Info("Skipping file (older than dead time of %v): %s", config.DeadTime, file) - - // Store the offset that we should resume from if we notice a modification - info.finish_offset = fileinfo.Size() - p.registrar_spool.Add(registrar.NewDiscoverEvent(info, file, fileinfo.Size(), fileinfo)) - } else { - // Process new file - log.Info("Launching harvester on new file: %s", file) - p.startHarvester(info, config) - } - } - - // Store the new entry - p.prospectors[info] = info - } else { - if !info.identity.SameAs(fileinfo) { - // Keep the old file in case we find it again shortly - info.orphaned = Orphaned_Maybe - - if previous, previousinfo := p.lookupFileIds(file, fileinfo); previous != "" { - // Symlinks could mean we see the same file twice - skip if we have - if previousinfo == nil { - p.flagDuplicateError(file, nil) - continue - } - - // This file was renamed from another file we know - link the same harvester channel as the old file - log.Info("File rename was detected: %s -> %s", previous, file) - info = previousinfo - info.file = file - - p.registrar_spool.Add(registrar.NewRenamedEvent(info, file)) - } else { - // File is not the same file we saw previously, it must have rotated and is a new file - log.Info("Launching harvester on rotated file: %s", file) - - // Forget about the previous harvester and let it continue on the old file - so start a new channel to use with the new harvester - info = newProspectorInfoFromFileInfo(file, fileinfo) - - // Process new file - p.startHarvester(info, config) - } - - // Store it - p.prospectors[info] = info - } - } - - // Resume stopped harvesters - resume := !info.isRunning() - if resume { - if info.status == Status_Resume { - // This is a filestate that was saved, resume the harvester - log.Info("Resuming harvester on a previously harvested file: %s", file) - } else if info.status == Status_Failed { - // Last attempt we 
failed to start, try again - log.Info("Attempting to restart failed harvester: %s", file) - } else if info.identity.Stat().ModTime() != fileinfo.ModTime() { - // Resume harvesting of an old file we've stopped harvesting from - log.Info("Resuming harvester on an old file that was just modified: %s", file) - } else { - resume = false - } - } - - info.update(fileinfo, p.iteration) - - if resume { - p.startHarvesterWithOffset(info, config, info.finish_offset) - } - - p.prospectorindex[file] = info - } // for each file matched by the glob + // Evaluate the path as a wildcards/shell glob + matches, err := filepath.Glob(path) + if err != nil { + log.Error("glob(%s) failed: %v", path, err) + return + } + + // Check any matched files to see if we need to start a harvester + for _, file := range matches { + // Check the current info against our index + info, is_known := p.prospectorindex[file] + + // Stat the file, following any symlinks + // TODO: Low priority. Trigger loadFileId here for Windows instead of + // waiting for Harvester or Registrar to do it + fileinfo, err := os.Stat(file) + if err == nil { + if fileinfo.IsDir() { + err = newProspectorSkipError("Directory") + } + } + + if err != nil { + // Do we know this entry? 
+ if is_known { + if info.status != Status_Invalid { + // The current entry is not an error, orphan it so we can log one + info.orphaned = Orphaned_Yes + } else if info.err != err { + // The error is different, remove this entry we'll log a new one + delete(p.prospectors, info) + } else { + // The same error occurred - don't log it again + info.update(nil, p.iteration) + continue + } + } + + // This is a new error + info = newProspectorInfoInvalid(file, err) + info.update(nil, p.iteration) + + // Print a friendly log message + if _, ok := err.(*ProspectorSkipError); ok { + log.Info("Skipping %s: %s", file, err) + } else { + log.Error("Error prospecting %s: %s", file, err) + } + + p.prospectors[info] = info + p.prospectorindex[file] = info + continue + } else if is_known && info.status == Status_Invalid { + // We have an error stub and we've just successfully got fileinfo + // Mark is_known so we treat as a new file still + is_known = false + } + + // Conditions for starting a new harvester: + // - file path hasn't been seen before + // - the file's inode or device changed + if !is_known { + // Check for dead time, but only if the file modification time is before the last scan started + // This ensures we don't skip genuine creations with dead times less than 10s + if previous, previousinfo := p.lookupFileIds(file, fileinfo); previous != "" { + // Symlinks could mean we see the same file twice - skip if we have + if previousinfo == nil { + p.flagDuplicateError(file, info) + continue + } + + // This file was simply renamed (known inode+dev) - link the same harvester channel as the old file + log.Info("File rename was detected: %s -> %s", previous, file) + info = previousinfo + info.file = file + + p.registrar_spool.Add(registrar.NewRenamedEvent(info, file)) + } else { + // This is a new entry + info = newProspectorInfoFromFileInfo(file, fileinfo) + + if fileinfo.ModTime().Before(p.lastscan) && time.Since(fileinfo.ModTime()) > config.DeadTime { + // Old file, skip it, 
but push offset of file size so we start from the end if this file changes and needs picking up + log.Info("Skipping file (older than dead time of %v): %s", config.DeadTime, file) + + // Store the offset that we should resume from if we notice a modification + info.finish_offset = fileinfo.Size() + p.registrar_spool.Add(registrar.NewDiscoverEvent(info, file, fileinfo.Size(), fileinfo)) + } else { + // Process new file + log.Info("Launching harvester on new file: %s", file) + p.startHarvester(info, config) + } + } + + // Store the new entry + p.prospectors[info] = info + } else { + if !info.identity.SameAs(fileinfo) { + // Keep the old file in case we find it again shortly + info.orphaned = Orphaned_Maybe + + if previous, previousinfo := p.lookupFileIds(file, fileinfo); previous != "" { + // Symlinks could mean we see the same file twice - skip if we have + if previousinfo == nil { + p.flagDuplicateError(file, nil) + continue + } + + // This file was renamed from another file we know - link the same harvester channel as the old file + log.Info("File rename was detected: %s -> %s", previous, file) + info = previousinfo + info.file = file + + p.registrar_spool.Add(registrar.NewRenamedEvent(info, file)) + } else { + // File is not the same file we saw previously, it must have rotated and is a new file + log.Info("Launching harvester on rotated file: %s", file) + + // Forget about the previous harvester and let it continue on the old file - so start a new channel to use with the new harvester + info = newProspectorInfoFromFileInfo(file, fileinfo) + + // Process new file + p.startHarvester(info, config) + } + + // Store it + p.prospectors[info] = info + } + } + + // Resume stopped harvesters + resume := !info.isRunning() + if resume { + if info.status == Status_Resume { + // This is a filestate that was saved, resume the harvester + log.Info("Resuming harvester on a previously harvested file: %s", file) + } else if info.status == Status_Failed { + // Last attempt we 
failed to start, try again + log.Info("Attempting to restart failed harvester: %s", file) + } else if info.identity.Stat().ModTime() != fileinfo.ModTime() { + // Resume harvesting of an old file we've stopped harvesting from + log.Info("Resuming harvester on an old file that was just modified: %s", file) + } else { + resume = false + } + } + + info.update(fileinfo, p.iteration) + + if resume { + p.startHarvesterWithOffset(info, config, info.finish_offset) + } + + p.prospectorindex[file] = info + } // for each file matched by the glob } func (p *Prospector) flagDuplicateError(file string, info *prospectorInfo) { - // Have we already logged this error? - if info != nil { - if info.status == Status_Invalid { - if skip_err, ok := info.err.(*ProspectorSkipError); ok && skip_err.message == "Duplicate" { - return - } - } - - // Remove the old info - delete(p.prospectors, info) - } - - // Flag duplicate error and save it - info = newProspectorInfoInvalid(file, newProspectorSkipError("Duplicate")) - info.update(nil, p.iteration) - p.prospectors[info] = info - p.prospectorindex[file] = info + // Have we already logged this error? 
+ if info != nil { + if info.status == Status_Invalid { + if skip_err, ok := info.err.(*ProspectorSkipError); ok && skip_err.message == "Duplicate" { + return + } + } + + // Remove the old info + delete(p.prospectors, info) + } + + // Flag duplicate error and save it + info = newProspectorInfoInvalid(file, newProspectorSkipError("Duplicate")) + info.update(nil, p.iteration) + p.prospectors[info] = info + p.prospectorindex[file] = info } func (p *Prospector) startHarvester(info *prospectorInfo, fileconfig *core.FileConfig) { - var offset int64 + var offset int64 - if p.from_beginning { - offset = 0 - } else { - offset = info.identity.Stat().Size() - } + if p.from_beginning { + offset = 0 + } else { + offset = info.identity.Stat().Size() + } - // Send a new file event to allow registrar to begin persisting for this harvester - p.registrar_spool.Add(registrar.NewDiscoverEvent(info, info.file, offset, info.identity.Stat())) + // Send a new file event to allow registrar to begin persisting for this harvester + p.registrar_spool.Add(registrar.NewDiscoverEvent(info, info.file, offset, info.identity.Stat())) - p.startHarvesterWithOffset(info, fileconfig, offset) + p.startHarvesterWithOffset(info, fileconfig, offset) } func (p *Prospector) startHarvesterWithOffset(info *prospectorInfo, fileconfig *core.FileConfig, offset int64) { - // TODO - hook in a shutdown channel - info.harvester = harvester.NewHarvester(info, p.config, &fileconfig.StreamConfig, offset) - info.running = true - info.status = Status_Ok - info.harvester.Start(p.output) + // TODO - hook in a shutdown channel + info.harvester = harvester.NewHarvester(info, p.config, &fileconfig.StreamConfig, offset) + info.running = true + info.status = Status_Ok + info.harvester.Start(p.output) } func (p *Prospector) lookupFileIds(file string, info os.FileInfo) (string, *prospectorInfo) { - for _, ki := range p.prospectors { - if ki.status == Status_Invalid { - // Don't consider error placeholders - continue - } - if 
ki.orphaned == Orphaned_No && ki.file == file { - // We already know the prospector info for this file doesn't match, so don't check again - continue - } - if ki.identity.SameAs(info) { - // Already seen? - if ki.last_seen == p.iteration { - return ki.file, nil - } - - // Found previous information, remove it and return it (it will be added again) - delete(p.prospectors, ki) - if ki.orphaned == Orphaned_No { - delete(p.prospectorindex, ki.file) - } else { - ki.orphaned = Orphaned_No - } - return ki.file, ki - } - } - - return "", nil + for _, ki := range p.prospectors { + if ki.status == Status_Invalid { + // Don't consider error placeholders + continue + } + if ki.orphaned == Orphaned_No && ki.file == file { + // We already know the prospector info for this file doesn't match, so don't check again + continue + } + if ki.identity.SameAs(info) { + // Already seen? + if ki.last_seen == p.iteration { + return ki.file, nil + } + + // Found previous information, remove it and return it (it will be added again) + delete(p.prospectors, ki) + if ki.orphaned == Orphaned_No { + delete(p.prospectorindex, ki.file) + } else { + ki.orphaned = Orphaned_No + } + return ki.file, ki + } + } + + return "", nil } func (p *Prospector) Snapshot() []*core.Snapshot { - select { - case p.snapshot_chan <- 1: - // Timeout after 5 seconds - case <-time.After(5 * time.Second): - ret := core.NewSnapshot("Prospector") - ret.AddEntry("Error", "Timeout") - return []*core.Snapshot{ret} - } - - return <-p.snapshot_sink + select { + case p.snapshot_chan <- 1: + // Timeout after 5 seconds + case <-time.After(5 * time.Second): + ret := core.NewSnapshot("Prospector") + ret.AddEntry("Error", "Timeout") + return []*core.Snapshot{ret} + } + + return <-p.snapshot_sink } func (p *Prospector) handleSnapshot() { - snapshots := make([]*core.Snapshot, 1) + snapshots := make([]*core.Snapshot, 1) - snapshots[0] = core.NewSnapshot("Prospector") - snapshots[0].AddEntry("Watched files", len(p.prospectorindex)) - 
snapshots[0].AddEntry("Active states", len(p.prospectors)) + snapshots[0] = core.NewSnapshot("Prospector") + snapshots[0].AddEntry("Watched files", len(p.prospectorindex)) + snapshots[0].AddEntry("Active states", len(p.prospectors)) - for _, info := range p.prospectorindex { - snapshots = append(snapshots, p.snapshotInfo(info)) - } + for _, info := range p.prospectorindex { + snapshots = append(snapshots, p.snapshotInfo(info)) + } - for _, info := range p.prospectors { - if info.orphaned == Orphaned_No { - continue - } - snapshots = append(snapshots, p.snapshotInfo(info)) - } + for _, info := range p.prospectors { + if info.orphaned == Orphaned_No { + continue + } + snapshots = append(snapshots, p.snapshotInfo(info)) + } - p.snapshot_sink <- snapshots + p.snapshot_sink <- snapshots } func (p *Prospector) snapshotInfo(info *prospectorInfo) *core.Snapshot { - var extra string - var status string - - if info.file == "-" { - extra = "Stdin / " - } else { - switch (info.orphaned) { - case Orphaned_Maybe: - extra = "Orphan? / " - case Orphaned_Yes: - extra = "Orphan / " - } - } - - switch (info.status) { - default: - if info.running { - status = "Running" - } else { - status = "Dead" - } - case Status_Resume: - status = "Resuming" - case Status_Failed: - status = fmt.Sprintf("Failed: %s", info.err) - case Status_Invalid: - if _, ok := info.err.(*ProspectorSkipError); ok { - status = fmt.Sprintf("Skipped (%s)", info.err) - } else { - status = fmt.Sprintf("Error: %s", info.err) - } - } - - snap := core.NewSnapshot(fmt.Sprintf("\"State: %s (%s%p)\"", info.file, extra, info)) - snap.AddEntry("Status", status) - - if info.running { - if sub_snap := info.getSnapshot(); sub_snap != nil { - snap.AddSub(sub_snap) - } - } - - return snap + var extra string + var status string + + if info.file == "-" { + extra = "Stdin / " + } else { + switch info.orphaned { + case Orphaned_Maybe: + extra = "Orphan? 
/ " + case Orphaned_Yes: + extra = "Orphan / " + } + } + + switch info.status { + default: + if info.running { + status = "Running" + } else { + status = "Dead" + } + case Status_Resume: + status = "Resuming" + case Status_Failed: + status = fmt.Sprintf("Failed: %s", info.err) + case Status_Invalid: + if _, ok := info.err.(*ProspectorSkipError); ok { + status = fmt.Sprintf("Skipped (%s)", info.err) + } else { + status = fmt.Sprintf("Error: %s", info.err) + } + } + + snap := core.NewSnapshot(fmt.Sprintf("\"State: %s (%s%p)\"", info.file, extra, info)) + snap.AddEntry("Status", status) + + if info.running { + if sub_snap := info.getSnapshot(); sub_snap != nil { + snap.AddSub(sub_snap) + } + } + + return snap } diff --git a/src/lc-lib/prospector/snapshot.go b/src/lc-lib/prospector/snapshot.go index d6f11b02..2cc2b151 100644 --- a/src/lc-lib/prospector/snapshot.go +++ b/src/lc-lib/prospector/snapshot.go @@ -12,7 +12,7 @@ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
-*/ + */ package prospector diff --git a/src/lc-lib/publisher/logging.go b/src/lc-lib/publisher/logging.go index c96d9ea6..12c857b3 100644 --- a/src/lc-lib/publisher/logging.go +++ b/src/lc-lib/publisher/logging.go @@ -21,5 +21,5 @@ import "github.com/op/go-logging" var log *logging.Logger func init() { - log = logging.MustGetLogger("publisher") + log = logging.MustGetLogger("publisher") } diff --git a/src/lc-lib/publisher/pending_payload.go b/src/lc-lib/publisher/pending_payload.go index 170bb2db..a01f8fa5 100644 --- a/src/lc-lib/publisher/pending_payload.go +++ b/src/lc-lib/publisher/pending_payload.go @@ -17,113 +17,113 @@ package publisher import ( - "bytes" - "compress/zlib" - "encoding/binary" - "errors" - "github.com/driskell/log-courier/src/lc-lib/core" - "time" + "bytes" + "compress/zlib" + "encoding/binary" + "errors" + "github.com/driskell/log-courier/src/lc-lib/core" + "time" ) var ( - ErrPayloadCorrupt = errors.New("Payload is corrupt") + ErrPayloadCorrupt = errors.New("Payload is corrupt") ) type pendingPayload struct { - next *pendingPayload - nonce string - events []*core.EventDescriptor - last_sequence int - sequence_len int - ack_events int - processed int - payload []byte - timeout time.Time + next *pendingPayload + nonce string + events []*core.EventDescriptor + last_sequence int + sequence_len int + ack_events int + processed int + payload []byte + timeout time.Time } func newPendingPayload(events []*core.EventDescriptor, nonce string, timeout time.Duration) (*pendingPayload, error) { - payload := &pendingPayload{ - events: events, - nonce: nonce, - timeout: time.Now().Add(timeout), - } + payload := &pendingPayload{ + events: events, + nonce: nonce, + timeout: time.Now().Add(timeout), + } - if err := payload.Generate(); err != nil { - return nil, err - } + if err := payload.Generate(); err != nil { + return nil, err + } - return payload, nil + return payload, nil } func (pp *pendingPayload) Generate() (err error) { - var buffer bytes.Buffer - - 
// Assertion - if len(pp.events) == 0 { - return ErrPayloadCorrupt - } - - // Begin with the nonce - if _, err = buffer.Write([]byte(pp.nonce)[0:16]); err != nil { - return - } - - var compressor *zlib.Writer - if compressor, err = zlib.NewWriterLevel(&buffer, 3); err != nil { - return - } - - // Append all the events - for _, event := range pp.events[pp.ack_events:] { - if err = binary.Write(compressor, binary.BigEndian, uint32(len(event.Event))); err != nil { - return - } - if _, err = compressor.Write(event.Event); err != nil { - return - } - } - - compressor.Close() - - pp.payload = buffer.Bytes() - pp.last_sequence = 0 - pp.sequence_len = len(pp.events) - pp.ack_events - - return + var buffer bytes.Buffer + + // Assertion + if len(pp.events) == 0 { + return ErrPayloadCorrupt + } + + // Begin with the nonce + if _, err = buffer.Write([]byte(pp.nonce)[0:16]); err != nil { + return + } + + var compressor *zlib.Writer + if compressor, err = zlib.NewWriterLevel(&buffer, 3); err != nil { + return + } + + // Append all the events + for _, event := range pp.events[pp.ack_events:] { + if err = binary.Write(compressor, binary.BigEndian, uint32(len(event.Event))); err != nil { + return + } + if _, err = compressor.Write(event.Event); err != nil { + return + } + } + + compressor.Close() + + pp.payload = buffer.Bytes() + pp.last_sequence = 0 + pp.sequence_len = len(pp.events) - pp.ack_events + + return } func (pp *pendingPayload) Ack(sequence int) (int, bool) { - if sequence <= pp.last_sequence { - // No change - return 0, false - } else if sequence >= pp.sequence_len { - // Full ACK - lines := pp.sequence_len - pp.last_sequence - pp.ack_events = len(pp.events) - pp.last_sequence = sequence - pp.payload = nil - return lines, true - } - - lines := sequence - pp.last_sequence - pp.ack_events += lines - pp.last_sequence = sequence - pp.payload = nil - return lines, false + if sequence <= pp.last_sequence { + // No change + return 0, false + } else if sequence >= 
pp.sequence_len { + // Full ACK + lines := pp.sequence_len - pp.last_sequence + pp.ack_events = len(pp.events) + pp.last_sequence = sequence + pp.payload = nil + return lines, true + } + + lines := sequence - pp.last_sequence + pp.ack_events += lines + pp.last_sequence = sequence + pp.payload = nil + return lines, false } func (pp *pendingPayload) HasAck() bool { - return pp.ack_events != 0 + return pp.ack_events != 0 } func (pp *pendingPayload) Complete() bool { - return len(pp.events) == 0 + return len(pp.events) == 0 } func (pp *pendingPayload) Rollup() []*core.EventDescriptor { - pp.processed += pp.ack_events - rollup := pp.events[:pp.ack_events] - pp.events = pp.events[pp.ack_events:] - pp.ack_events = 0 - return rollup + pp.processed += pp.ack_events + rollup := pp.events[:pp.ack_events] + pp.events = pp.events[pp.ack_events:] + pp.ack_events = 0 + return rollup } diff --git a/src/lc-lib/publisher/pending_payload_test.go b/src/lc-lib/publisher/pending_payload_test.go index 1ab0edb2..6a56eaaa 100644 --- a/src/lc-lib/publisher/pending_payload_test.go +++ b/src/lc-lib/publisher/pending_payload_test.go @@ -12,14 +12,14 @@ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
-*/ + */ package publisher import ( "github.com/driskell/log-courier/src/lc-lib/core" - "time" "testing" + "time" ) const ( diff --git a/src/lc-lib/publisher/publisher.go b/src/lc-lib/publisher/publisher.go index 5c42ebad..55492f17 100644 --- a/src/lc-lib/publisher/publisher.go +++ b/src/lc-lib/publisher/publisher.go @@ -20,44 +20,44 @@ package publisher import ( - "bytes" - "encoding/binary" - "errors" - "fmt" - "github.com/driskell/log-courier/src/lc-lib/core" - "github.com/driskell/log-courier/src/lc-lib/registrar" - "math/rand" - "sync" - "time" + "bytes" + "encoding/binary" + "errors" + "fmt" + "github.com/driskell/log-courier/src/lc-lib/core" + "github.com/driskell/log-courier/src/lc-lib/registrar" + "math/rand" + "sync" + "time" ) var ( - ErrNetworkTimeout = errors.New("Server did not respond within network timeout") - ErrNetworkPing = errors.New("Server did not respond to PING") + ErrNetworkTimeout = errors.New("Server did not respond within network timeout") + ErrNetworkPing = errors.New("Server did not respond to PING") ) const ( - // TODO(driskell): Make the idle timeout configurable like the network timeout is? - keepalive_timeout time.Duration = 900 * time.Second + // TODO(driskell): Make the idle timeout configurable like the network timeout is? 
+ keepalive_timeout time.Duration = 900 * time.Second ) const ( - Status_Disconnected = iota - Status_Connected - Status_Reconnecting + Status_Disconnected = iota + Status_Connected + Status_Reconnecting ) type EventSpool interface { - Close() - Add(registrar.RegistrarEvent) - Send() + Close() + Add(registrar.RegistrarEvent) + Send() } type NullEventSpool struct { } func newNullEventSpool() EventSpool { - return &NullEventSpool{} + return &NullEventSpool{} } func (s *NullEventSpool) Close() { @@ -70,606 +70,606 @@ func (s *NullEventSpool) Send() { } type Publisher struct { - core.PipelineSegment - core.PipelineConfigReceiver - core.PipelineSnapshotProvider - - sync.RWMutex - - config *core.NetworkConfig - transport core.Transport - status int - can_send <-chan int - pending_ping bool - pending_payloads map[string]*pendingPayload - first_payload *pendingPayload - last_payload *pendingPayload - num_payloads int64 - out_of_sync int - input chan []*core.EventDescriptor - registrar_spool EventSpool - shutdown bool - line_count int64 - retry_count int64 - seconds_no_ack int - - timeout_count int64 - line_speed float64 - last_line_count int64 - last_retry_count int64 - last_measurement time.Time + core.PipelineSegment + core.PipelineConfigReceiver + core.PipelineSnapshotProvider + + sync.RWMutex + + config *core.NetworkConfig + transport core.Transport + status int + can_send <-chan int + pending_ping bool + pending_payloads map[string]*pendingPayload + first_payload *pendingPayload + last_payload *pendingPayload + num_payloads int64 + out_of_sync int + input chan []*core.EventDescriptor + registrar_spool EventSpool + shutdown bool + line_count int64 + retry_count int64 + seconds_no_ack int + + timeout_count int64 + line_speed float64 + last_line_count int64 + last_retry_count int64 + last_measurement time.Time } func NewPublisher(pipeline *core.Pipeline, config *core.NetworkConfig, registrar *registrar.Registrar) (*Publisher, error) { - ret := &Publisher{ - config: 
config, - input: make(chan []*core.EventDescriptor, 1), - } + ret := &Publisher{ + config: config, + input: make(chan []*core.EventDescriptor, 1), + } - if registrar == nil { - ret.registrar_spool = newNullEventSpool() - } else { - ret.registrar_spool = registrar.Connect() - } + if registrar == nil { + ret.registrar_spool = newNullEventSpool() + } else { + ret.registrar_spool = registrar.Connect() + } - if err := ret.init(); err != nil { - return nil, err - } + if err := ret.init(); err != nil { + return nil, err + } - pipeline.Register(ret) + pipeline.Register(ret) - return ret, nil + return ret, nil } func (p *Publisher) init() error { - var err error + var err error - p.pending_payloads = make(map[string]*pendingPayload) + p.pending_payloads = make(map[string]*pendingPayload) - // Set up the selected transport - if err = p.loadTransport(); err != nil { - return err - } + // Set up the selected transport + if err = p.loadTransport(); err != nil { + return err + } - return nil + return nil } func (p *Publisher) loadTransport() error { - transport, err := p.config.TransportFactory.NewTransport(p.config) - if err != nil { - return err - } + transport, err := p.config.TransportFactory.NewTransport(p.config) + if err != nil { + return err + } - p.transport = transport + p.transport = transport - return nil + return nil } func (p *Publisher) Connect() chan<- []*core.EventDescriptor { - return p.input + return p.input } func (p *Publisher) Run() { - defer func() { - p.Done() - }() - - var input_toggle <-chan []*core.EventDescriptor - var retry_payload *pendingPayload - var err error - var reload int - - timer := time.NewTimer(keepalive_timeout) - stats_timer := time.NewTimer(time.Second) - - control_signal := p.OnShutdown() - delay_shutdown := func() { - // Flag shutdown for when we finish pending payloads - // TODO: Persist pending payloads and resume? 
Quicker shutdown - log.Warning("Delaying shutdown to wait for pending responses from the server") - control_signal = nil - p.shutdown = true - p.can_send = nil - input_toggle = nil - } + defer func() { + p.Done() + }() + + var input_toggle <-chan []*core.EventDescriptor + var retry_payload *pendingPayload + var err error + var reload int + + timer := time.NewTimer(keepalive_timeout) + stats_timer := time.NewTimer(time.Second) + + control_signal := p.OnShutdown() + delay_shutdown := func() { + // Flag shutdown for when we finish pending payloads + // TODO: Persist pending payloads and resume? Quicker shutdown + log.Warning("Delaying shutdown to wait for pending responses from the server") + control_signal = nil + p.shutdown = true + p.can_send = nil + input_toggle = nil + } PublishLoop: - for { - // Do we need to reload transport? - if reload == core.Reload_Transport { - // Shutdown and reload transport - p.transport.Shutdown() - - if err = p.loadTransport(); err != nil { - log.Error("The new transport configuration failed to apply: %s", err) - } - - reload = core.Reload_None - } else if reload != core.Reload_None { - reload = core.Reload_None - } - - if err = p.transport.Init(); err != nil { - log.Error("Transport init failed: %s", err) - - now := time.Now() - reconnect_due := now.Add(p.config.Reconnect) - - ReconnectTimeLoop: - for { - - select { - case <-time.After(reconnect_due.Sub(now)): - break ReconnectTimeLoop - case <-control_signal: - // TODO: Persist pending payloads and resume? 
Quicker shutdown - if p.num_payloads == 0 { - break PublishLoop - } - - delay_shutdown() - case config := <-p.OnConfig(): - // Apply and check for changes - reload = p.reloadConfig(&config.Network) - - // If a change and no pending payloads, process immediately - if reload != core.Reload_None && p.num_payloads == 0 { - break ReconnectTimeLoop - } - } - - now = time.Now() - if now.After(reconnect_due) { - break - } - } - - continue - } - - p.Lock() - p.status = Status_Connected - p.Unlock() - - timer.Reset(keepalive_timeout) - stats_timer.Reset(time.Second) - - p.pending_ping = false - input_toggle = nil - p.can_send = p.transport.CanSend() - - SelectLoop: - for { - select { - case <-p.can_send: - // Resend payloads from full retry first - if retry_payload != nil { - // Do we need to regenerate the payload? - if retry_payload.payload == nil { - if err = retry_payload.Generate(); err != nil { - break SelectLoop - } - } - - // Reset timeout - retry_payload.timeout = time.Now().Add(p.config.Timeout) - - log.Debug("Send now open: Retrying next payload") - - // Send the payload again - if err = p.transport.Write("JDAT", retry_payload.payload); err != nil { - break SelectLoop - } - - // Expect an ACK within network timeout if this is the first of the retries - if p.first_payload == retry_payload { - timer.Reset(p.config.Timeout) - } - - // Move to next non-empty payload - for { - retry_payload = retry_payload.next - if retry_payload == nil || retry_payload.ack_events != len(retry_payload.events) { - break - } - } - - break - } else if p.out_of_sync != 0 { - var resent bool - if resent, err = p.checkResend(); err != nil { - break SelectLoop - } else if resent { - log.Debug("Send now open: Resent a timed out payload") - // Expect an ACK within network timeout - timer.Reset(p.config.Timeout) - break - } - } - - // No pending payloads, are we shutting down? 
Skip if so - if p.shutdown { - break - } - - log.Debug("Send now open: Awaiting events for new payload") - - // Enable event wait - input_toggle = p.input - case events := <-input_toggle: - log.Debug("Sending new payload of %d events", len(events)) - - // Send - if err = p.sendNewPayload(events); err != nil { - break SelectLoop - } - - // Wait for send signal again - input_toggle = nil - - if p.num_payloads >= p.config.MaxPendingPayloads { - // Too many pending payloads, disable send temporarily - p.can_send = nil - log.Debug("Pending payload limit reached") - } - - // Expect an ACK within network timeout if this is first payload after idle - // Otherwise leave the previous timer - if p.num_payloads == 1 { - timer.Reset(p.config.Timeout) - } - case data := <-p.transport.Read(): - var signature, message []byte - - // Error? Or data? - switch data.(type) { - case error: - err = data.(error) - break SelectLoop - default: - signature = data.([][]byte)[0] - message = data.([][]byte)[1] - } - - switch { - case bytes.Compare(signature, []byte("PONG")) == 0: - if err = p.processPong(message); err != nil { - break SelectLoop - } - case bytes.Compare(signature, []byte("ACKN")) == 0: - if err = p.processAck(message); err != nil { - break SelectLoop - } - default: - err = fmt.Errorf("Unknown message received: % X", signature) - break SelectLoop - } - - // If no more pending payloads, set keepalive, otherwise reset to network timeout - if p.num_payloads == 0 { - // Handle shutdown - if p.shutdown { - break PublishLoop - } else if reload != core.Reload_None { - break SelectLoop - } - log.Debug("No more pending payloads, entering idle") - timer.Reset(keepalive_timeout) - } else { - log.Debug("%d payloads still pending, resetting timeout", p.num_payloads) - timer.Reset(p.config.Timeout) - } - case <-timer.C: - // If we have pending payloads, we should've received something by now - if p.num_payloads != 0 { - err = ErrNetworkTimeout - break SelectLoop - } - - // If we haven't 
received a PONG yet this is a timeout - if p.pending_ping { - err = ErrNetworkPing - break SelectLoop - } - - log.Debug("Idle timeout: sending PING") - - // Send a ping and expect a pong back (eventually) - // If we receive an ACK first, that's fine we'll reset timer - // But after those ACKs we should get a PONG - if err = p.transport.Write("PING", nil); err != nil { - break SelectLoop - } - - p.pending_ping = true - - // We may have just filled the send buffer - input_toggle = nil - - // Allow network timeout to receive something - timer.Reset(p.config.Timeout) - case <-control_signal: - // If no pending payloads, simply end - if p.num_payloads == 0 { - break PublishLoop - } - - delay_shutdown() - case config := <-p.OnConfig(): - // Apply and check for changes - reload = p.reloadConfig(&config.Network) - - // If a change and no pending payloads, process immediately - if reload != core.Reload_None && p.num_payloads == 0 { - break SelectLoop - } - - p.can_send = nil - case <-stats_timer.C: - p.updateStatistics(Status_Connected, nil) - stats_timer.Reset(time.Second) - } - } - - if err != nil { - // If we're shutting down and we hit a timeout and aren't out of sync - // We can then quit - as we'd be resending payloads anyway - if p.shutdown && p.out_of_sync == 0 { - log.Error("Transport error: %s", err) - break PublishLoop - } - - p.updateStatistics(Status_Reconnecting, err) - - // An error occurred, reconnect after timeout - log.Error("Transport error, will try again: %s", err) - time.Sleep(p.config.Reconnect) - } else { - log.Info("Reconnecting transport") - - p.updateStatistics(Status_Reconnecting, nil) - } - - retry_payload = p.first_payload - } - - p.transport.Shutdown() - - // Disconnect from registrar - p.registrar_spool.Close() - - log.Info("Publisher exiting") + for { + // Do we need to reload transport? 
+		if reload == core.Reload_Transport {
+			// Shutdown and reload transport
+			p.transport.Shutdown()
+
+			if err = p.loadTransport(); err != nil {
+				log.Error("The new transport configuration failed to apply: %s", err)
+			}
+
+			reload = core.Reload_None
+		} else if reload != core.Reload_None {
+			reload = core.Reload_None
+		}
+
+		if err = p.transport.Init(); err != nil {
+			log.Error("Transport init failed: %s", err)
+
+			now := time.Now()
+			reconnect_due := now.Add(p.config.Reconnect)
+
+		ReconnectTimeLoop:
+			for {
+
+				select {
+				case <-time.After(reconnect_due.Sub(now)):
+					break ReconnectTimeLoop
+				case <-control_signal:
+					// TODO: Persist pending payloads and resume? Quicker shutdown
+					if p.num_payloads == 0 {
+						break PublishLoop
+					}
+
+					delay_shutdown()
+				case config := <-p.OnConfig():
+					// Apply and check for changes
+					reload = p.reloadConfig(&config.Network)
+
+					// If a change and no pending payloads, process immediately
+					if reload != core.Reload_None && p.num_payloads == 0 {
+						break ReconnectTimeLoop
+					}
+				}
+
+				now = time.Now()
+				if now.After(reconnect_due) {
+					break
+				}
+			}
+
+			continue
+		}
+
+		p.Lock()
+		p.status = Status_Connected
+		p.Unlock()
+
+		timer.Reset(keepalive_timeout)
+		stats_timer.Reset(time.Second)
+
+		p.pending_ping = false
+		input_toggle = nil
+		p.can_send = p.transport.CanSend()
+
+	SelectLoop:
+		for {
+			select {
+			case <-p.can_send:
+				// Resend payloads from full retry first
+				if retry_payload != nil {
+					// Do we need to regenerate the payload?
+					if retry_payload.payload == nil {
+						if err = retry_payload.Generate(); err != nil {
+							break SelectLoop
+						}
+					}
+
+					// Reset timeout
+					retry_payload.timeout = time.Now().Add(p.config.Timeout)
+
+					log.Debug("Send now open: Retrying next payload")
+
+					// Send the payload again
+					if err = p.transport.Write("JDAT", retry_payload.payload); err != nil {
+						break SelectLoop
+					}
+
+					// Expect an ACK within network timeout if this is the first of the retries
+					if p.first_payload == retry_payload {
+						timer.Reset(p.config.Timeout)
+					}
+
+					// Move to next non-empty payload
+					for {
+						retry_payload = retry_payload.next
+						if retry_payload == nil || retry_payload.ack_events != len(retry_payload.events) {
+							break
+						}
+					}
+
+					break
+				} else if p.out_of_sync != 0 {
+					var resent bool
+					if resent, err = p.checkResend(); err != nil {
+						break SelectLoop
+					} else if resent {
+						log.Debug("Send now open: Resent a timed out payload")
+						// Expect an ACK within network timeout
+						timer.Reset(p.config.Timeout)
+						break
+					}
+				}
+
+				// No pending payloads, are we shutting down? Skip if so
+				if p.shutdown {
+					break
+				}
+
+				log.Debug("Send now open: Awaiting events for new payload")
+
+				// Enable event wait
+				input_toggle = p.input
+			case events := <-input_toggle:
+				log.Debug("Sending new payload of %d events", len(events))
+
+				// Send
+				if err = p.sendNewPayload(events); err != nil {
+					break SelectLoop
+				}
+
+				// Wait for send signal again
+				input_toggle = nil
+
+				if p.num_payloads >= p.config.MaxPendingPayloads {
+					// Too many pending payloads, disable send temporarily
+					p.can_send = nil
+					log.Debug("Pending payload limit reached")
+				}
+
+				// Expect an ACK within network timeout if this is first payload after idle
+				// Otherwise leave the previous timer
+				if p.num_payloads == 1 {
+					timer.Reset(p.config.Timeout)
+				}
+			case data := <-p.transport.Read():
+				var signature, message []byte
+
+				// Error? Or data?
+				switch data.(type) {
+				case error:
+					err = data.(error)
+					break SelectLoop
+				default:
+					signature = data.([][]byte)[0]
+					message = data.([][]byte)[1]
+				}
+
+				switch {
+				case bytes.Compare(signature, []byte("PONG")) == 0:
+					if err = p.processPong(message); err != nil {
+						break SelectLoop
+					}
+				case bytes.Compare(signature, []byte("ACKN")) == 0:
+					if err = p.processAck(message); err != nil {
+						break SelectLoop
+					}
+				default:
+					err = fmt.Errorf("Unknown message received: % X", signature)
+					break SelectLoop
+				}
+
+				// If no more pending payloads, set keepalive, otherwise reset to network timeout
+				if p.num_payloads == 0 {
+					// Handle shutdown
+					if p.shutdown {
+						break PublishLoop
+					} else if reload != core.Reload_None {
+						break SelectLoop
+					}
+					log.Debug("No more pending payloads, entering idle")
+					timer.Reset(keepalive_timeout)
+				} else {
+					log.Debug("%d payloads still pending, resetting timeout", p.num_payloads)
+					timer.Reset(p.config.Timeout)
+				}
+			case <-timer.C:
+				// If we have pending payloads, we should've received something by now
+				if p.num_payloads != 0 {
+					err = ErrNetworkTimeout
+					break SelectLoop
+				}
+
+				// If we haven't received a PONG yet this is a timeout
+				if p.pending_ping {
+					err = ErrNetworkPing
+					break SelectLoop
+				}
+
+				log.Debug("Idle timeout: sending PING")
+
+				// Send a ping and expect a pong back (eventually)
+				// If we receive an ACK first, that's fine we'll reset timer
+				// But after those ACKs we should get a PONG
+				if err = p.transport.Write("PING", nil); err != nil {
+					break SelectLoop
+				}
+
+				p.pending_ping = true
+
+				// We may have just filled the send buffer
+				input_toggle = nil
+
+				// Allow network timeout to receive something
+				timer.Reset(p.config.Timeout)
+			case <-control_signal:
+				// If no pending payloads, simply end
+				if p.num_payloads == 0 {
+					break PublishLoop
+				}
+
+				delay_shutdown()
+			case config := <-p.OnConfig():
+				// Apply and check for changes
+				reload = p.reloadConfig(&config.Network)
+
+				// If a change and no pending payloads, process immediately
+				if reload != core.Reload_None && p.num_payloads == 0 {
+					break SelectLoop
+				}
+
+				p.can_send = nil
+			case <-stats_timer.C:
+				p.updateStatistics(Status_Connected, nil)
+				stats_timer.Reset(time.Second)
+			}
+		}
+
+		if err != nil {
+			// If we're shutting down and we hit a timeout and aren't out of sync
+			// We can then quit - as we'd be resending payloads anyway
+			if p.shutdown && p.out_of_sync == 0 {
+				log.Error("Transport error: %s", err)
+				break PublishLoop
+			}
+
+			p.updateStatistics(Status_Reconnecting, err)
+
+			// An error occurred, reconnect after timeout
+			log.Error("Transport error, will try again: %s", err)
+			time.Sleep(p.config.Reconnect)
+		} else {
+			log.Info("Reconnecting transport")
+
+			p.updateStatistics(Status_Reconnecting, nil)
+		}
+
+		retry_payload = p.first_payload
+	}
+
+	p.transport.Shutdown()
+
+	// Disconnect from registrar
+	p.registrar_spool.Close()
+
+	log.Info("Publisher exiting")
 }
 
 func (p *Publisher) reloadConfig(new_config *core.NetworkConfig) int {
-  old_config := p.config
-  p.config = new_config
-
-  // Transport reload will return whether we need a full reload or not
-  reload := p.transport.ReloadConfig(new_config)
-  if reload == core.Reload_Transport {
-    return core.Reload_Transport
-  }
-
-  // Same servers?
-  if len(new_config.Servers) != len(old_config.Servers) {
-    return core.Reload_Servers
-  }
-
-  for i := range new_config.Servers {
-    if new_config.Servers[i] != old_config.Servers[i] {
-      return core.Reload_Servers
-    }
-  }
-
-  return reload
+	old_config := p.config
+	p.config = new_config
+
+	// Transport reload will return whether we need a full reload or not
+	reload := p.transport.ReloadConfig(new_config)
+	if reload == core.Reload_Transport {
+		return core.Reload_Transport
+	}
+
+	// Same servers?
+	if len(new_config.Servers) != len(old_config.Servers) {
+		return core.Reload_Servers
+	}
+
+	for i := range new_config.Servers {
+		if new_config.Servers[i] != old_config.Servers[i] {
+			return core.Reload_Servers
+		}
+	}
+
+	return reload
 }
 
 func (p *Publisher) updateStatistics(status int, err error) {
-  p.Lock()
+	p.Lock()
 
-  p.status = status
+	p.status = status
 
-  p.line_speed = core.CalculateSpeed(time.Since(p.last_measurement), p.line_speed, float64(p.line_count - p.last_line_count), &p.seconds_no_ack)
+	p.line_speed = core.CalculateSpeed(time.Since(p.last_measurement), p.line_speed, float64(p.line_count-p.last_line_count), &p.seconds_no_ack)
 
-  p.last_line_count = p.line_count
-  p.last_retry_count = p.retry_count
-  p.last_measurement = time.Now()
+	p.last_line_count = p.line_count
+	p.last_retry_count = p.retry_count
+	p.last_measurement = time.Now()
 
-  if err == ErrNetworkTimeout || err == ErrNetworkPing {
-    p.timeout_count++
-  }
+	if err == ErrNetworkTimeout || err == ErrNetworkPing {
+		p.timeout_count++
+	}
 
-  p.Unlock()
+	p.Unlock()
 }
 
 func (p *Publisher) checkResend() (bool, error) {
-  // We're out of sync (received ACKs for later payloads but not earlier ones)
-  // Check timeouts of earlier payloads and resend if necessary
-  if payload := p.first_payload; payload.timeout.Before(time.Now()) {
-    p.retry_count++
-
-    // Do we need to regenerate the payload?
-    if payload.payload == nil {
-      if err := payload.Generate(); err != nil {
-        return false, err
-      }
-    }
-
-    // Update timeout
-    payload.timeout = time.Now().Add(p.config.Timeout)
-
-    // Requeue the payload
-    p.first_payload = payload.next
-    payload.next = nil
-    p.last_payload.next = payload
-    p.last_payload = payload
-
-    // Send the payload again
-    if err := p.transport.Write("JDAT", payload.payload); err != nil {
-      return false, err
-    }
-
-    return true, nil
-  }
-
-  return false, nil
+	// We're out of sync (received ACKs for later payloads but not earlier ones)
+	// Check timeouts of earlier payloads and resend if necessary
+	if payload := p.first_payload; payload.timeout.Before(time.Now()) {
+		p.retry_count++
+
+		// Do we need to regenerate the payload?
+		if payload.payload == nil {
+			if err := payload.Generate(); err != nil {
+				return false, err
+			}
+		}
+
+		// Update timeout
+		payload.timeout = time.Now().Add(p.config.Timeout)
+
+		// Requeue the payload
+		p.first_payload = payload.next
+		payload.next = nil
+		p.last_payload.next = payload
+		p.last_payload = payload
+
+		// Send the payload again
+		if err := p.transport.Write("JDAT", payload.payload); err != nil {
+			return false, err
+		}
+
+		return true, nil
+	}
+
+	return false, nil
 }
 
 func (p *Publisher) generateNonce() string {
-  // This could maybe be made a bit more efficient
-  nonce := make([]byte, 16)
-  for i := 0; i < 16; i++ {
-    nonce[i] = byte(rand.Intn(255))
-  }
-  return string(nonce)
+	// This could maybe be made a bit more efficient
+	nonce := make([]byte, 16)
+	for i := 0; i < 16; i++ {
+		nonce[i] = byte(rand.Intn(255))
+	}
+	return string(nonce)
 }
 
 func (p *Publisher) sendNewPayload(events []*core.EventDescriptor) (err error) {
-  // Calculate a nonce
-  nonce := p.generateNonce()
-  for {
-    if _, found := p.pending_payloads[nonce]; !found {
-      break
-    }
-    // Collision - generate again - should be extremely rare
-    nonce = p.generateNonce()
-  }
-
-  var payload *pendingPayload
-  if payload, err = newPendingPayload(events, nonce, p.config.Timeout); err != nil {
-    return
-  }
-
-  // Save pending payload until we receive ack, and discard buffer
-  p.pending_payloads[nonce] = payload
-  if p.first_payload == nil {
-    p.first_payload = payload
-  } else {
-    p.last_payload.next = payload
-  }
-  p.last_payload = payload
-
-  p.Lock()
-  p.num_payloads++
-  p.Unlock()
-
-  return p.transport.Write("JDAT", payload.payload)
+	// Calculate a nonce
+	nonce := p.generateNonce()
+	for {
+		if _, found := p.pending_payloads[nonce]; !found {
+			break
+		}
+		// Collision - generate again - should be extremely rare
+		nonce = p.generateNonce()
+	}
+
+	var payload *pendingPayload
+	if payload, err = newPendingPayload(events, nonce, p.config.Timeout); err != nil {
+		return
+	}
+
+	// Save pending payload until we receive ack, and discard buffer
+	p.pending_payloads[nonce] = payload
+	if p.first_payload == nil {
+		p.first_payload = payload
+	} else {
+		p.last_payload.next = payload
+	}
+	p.last_payload = payload
+
+	p.Lock()
+	p.num_payloads++
+	p.Unlock()
+
+	return p.transport.Write("JDAT", payload.payload)
 }
 
 func (p *Publisher) processPong(message []byte) error {
-  if len(message) != 0 {
-    return fmt.Errorf("PONG message overflow (%d)", len(message))
-  }
+	if len(message) != 0 {
+		return fmt.Errorf("PONG message overflow (%d)", len(message))
+	}
 
-  // Were we pending a ping?
-  if !p.pending_ping {
-    return errors.New("Unexpected PONG received")
-  }
+	// Were we pending a ping?
+	if !p.pending_ping {
+		return errors.New("Unexpected PONG received")
+	}
 
-  log.Debug("PONG message received")
+	log.Debug("PONG message received")
 
-  p.pending_ping = false
-  return nil
+	p.pending_ping = false
+	return nil
 }
 
 func (p *Publisher) processAck(message []byte) (err error) {
-  if len(message) != 20 {
-    err = fmt.Errorf("ACKN message corruption (%d)", len(message))
-    return
-  }
-
-  // Read the nonce and sequence number acked
-  nonce, sequence := string(message[:16]), binary.BigEndian.Uint32(message[16:20])
-
-  log.Debug("ACKN message received for payload %x sequence %d", nonce, sequence)
-
-  // Grab the payload the ACK corresponds to by using nonce
-  payload, found := p.pending_payloads[nonce]
-  if !found {
-    // Don't fail here in case we had temporary issues and resend a payload, only for us to receive duplicate ACKN
-    return
-  }
-
-  ack_events := payload.ack_events
-
-  // Process ACK
-  lines, complete := payload.Ack(int(sequence))
-  p.line_count += int64(lines)
-
-  if complete {
-    // No more events left for this payload, remove from pending list
-    delete(p.pending_payloads, nonce)
-  }
-
-  // We potentially receive out-of-order ACKs due to payloads distributed across servers
-  // This is where we enforce ordering again to ensure registrar receives ACK in order
-  if payload == p.first_payload {
-    // The out of sync count we have will never include the first payload, so
-    // take the value +1
-    out_of_sync := p.out_of_sync + 1
-
-    // For each full payload we mark off, we decrease this count, the first we
-    // mark off will always be the first payload - thus the +1. Subsequent
-    // payloads are the out of sync ones - so if we mark them off we decrease
-    // the out of sync count
-    for payload.HasAck() {
-      p.registrar_spool.Add(registrar.NewAckEvent(payload.Rollup()))
-
-      if !payload.Complete() {
-        break
-      }
-
-      payload = payload.next
-      p.first_payload = payload
-      out_of_sync--
-      p.out_of_sync = out_of_sync
-
-      p.Lock()
-      p.num_payloads--
-      p.Unlock()
-
-      // Resume sending if we stopped due to excessive pending payload count
-      if !p.shutdown && p.can_send == nil {
-        p.can_send = p.transport.CanSend()
-      }
-
-      if payload == nil {
-        break
-      }
-    }
-
-    p.registrar_spool.Send()
-  } else if ack_events == 0 {
-    // If this is NOT the first payload, and this is the first acknowledgement
-    // for this payload, then increase out of sync payload count
-    p.out_of_sync++
-  }
-
-  return
+	if len(message) != 20 {
+		err = fmt.Errorf("ACKN message corruption (%d)", len(message))
+		return
+	}
+
+	// Read the nonce and sequence number acked
+	nonce, sequence := string(message[:16]), binary.BigEndian.Uint32(message[16:20])
+
+	log.Debug("ACKN message received for payload %x sequence %d", nonce, sequence)
+
+	// Grab the payload the ACK corresponds to by using nonce
+	payload, found := p.pending_payloads[nonce]
+	if !found {
+		// Don't fail here in case we had temporary issues and resend a payload, only for us to receive duplicate ACKN
+		return
+	}
+
+	ack_events := payload.ack_events
+
+	// Process ACK
+	lines, complete := payload.Ack(int(sequence))
+	p.line_count += int64(lines)
+
+	if complete {
+		// No more events left for this payload, remove from pending list
+		delete(p.pending_payloads, nonce)
+	}
+
+	// We potentially receive out-of-order ACKs due to payloads distributed across servers
+	// This is where we enforce ordering again to ensure registrar receives ACK in order
+	if payload == p.first_payload {
+		// The out of sync count we have will never include the first payload, so
+		// take the value +1
+		out_of_sync := p.out_of_sync + 1
+
+		// For each full payload we mark off, we decrease this count, the first we
+		// mark off will always be the first payload - thus the +1. Subsequent
+		// payloads are the out of sync ones - so if we mark them off we decrease
+		// the out of sync count
+		for payload.HasAck() {
+			p.registrar_spool.Add(registrar.NewAckEvent(payload.Rollup()))
+
+			if !payload.Complete() {
+				break
+			}
+
+			payload = payload.next
+			p.first_payload = payload
+			out_of_sync--
+			p.out_of_sync = out_of_sync
+
+			p.Lock()
+			p.num_payloads--
+			p.Unlock()
+
+			// Resume sending if we stopped due to excessive pending payload count
+			if !p.shutdown && p.can_send == nil {
+				p.can_send = p.transport.CanSend()
+			}
+
+			if payload == nil {
+				break
+			}
+		}
+
+		p.registrar_spool.Send()
+	} else if ack_events == 0 {
+		// If this is NOT the first payload, and this is the first acknowledgement
+		// for this payload, then increase out of sync payload count
+		p.out_of_sync++
+	}
+
+	return
 }
 
 func (p *Publisher) Snapshot() []*core.Snapshot {
-  p.RLock()
+	p.RLock()
 
-  snapshot := core.NewSnapshot("Publisher")
+	snapshot := core.NewSnapshot("Publisher")
 
-  switch p.status {
-  case Status_Connected:
-    snapshot.AddEntry("Status", "Connected")
-  case Status_Reconnecting:
-    snapshot.AddEntry("Status", "Reconnecting...")
-  default:
-    snapshot.AddEntry("Status", "Disconnected")
-  }
+	switch p.status {
+	case Status_Connected:
+		snapshot.AddEntry("Status", "Connected")
+	case Status_Reconnecting:
+		snapshot.AddEntry("Status", "Reconnecting...")
+	default:
+		snapshot.AddEntry("Status", "Disconnected")
+	}
 
-  snapshot.AddEntry("Speed (Lps)", p.line_speed)
-  snapshot.AddEntry("Published lines", p.last_line_count)
-  snapshot.AddEntry("Pending Payloads", p.num_payloads)
-  snapshot.AddEntry("Timeouts", p.timeout_count)
-  snapshot.AddEntry("Retransmissions", p.last_retry_count)
+	snapshot.AddEntry("Speed (Lps)", p.line_speed)
+	snapshot.AddEntry("Published lines", p.last_line_count)
+	snapshot.AddEntry("Pending Payloads", p.num_payloads)
+	snapshot.AddEntry("Timeouts", p.timeout_count)
+	snapshot.AddEntry("Retransmissions", p.last_retry_count)
 
-  p.RUnlock()
+	p.RUnlock()
 
-  return []*core.Snapshot{snapshot}
+	return []*core.Snapshot{snapshot}
 }
diff --git a/src/lc-lib/registrar/event_ack.go b/src/lc-lib/registrar/event_ack.go
index 331f68c9..70b0c546 100644
--- a/src/lc-lib/registrar/event_ack.go
+++ b/src/lc-lib/registrar/event_ack.go
@@ -17,33 +17,33 @@ package registrar
 
 import (
-  "github.com/driskell/log-courier/src/lc-lib/core"
+	"github.com/driskell/log-courier/src/lc-lib/core"
 )
 
 type AckEvent struct {
-  events []*core.EventDescriptor
+	events []*core.EventDescriptor
 }
 
 func NewAckEvent(events []*core.EventDescriptor) *AckEvent {
-  return &AckEvent{
-    events: events,
-  }
+	return &AckEvent{
+		events: events,
+	}
 }
 
 func (e *AckEvent) Process(state map[core.Stream]*FileState) {
-  if len(e.events) == 1 {
-    log.Debug("Registrar received offsets for %d log entries", len(e.events))
-  } else {
-    log.Debug("Registrar received offsets for %d log entries", len(e.events))
-  }
+	if len(e.events) == 1 {
+		log.Debug("Registrar received offsets for %d log entry", len(e.events))
+	} else {
+		log.Debug("Registrar received offsets for %d log entries", len(e.events))
+	}
 
-  for _, event := range e.events {
-    _, is_found := state[event.Stream]
-    if !is_found {
-      // This is probably stdin then or a deleted file we can't resume
-      continue
-    }
+	for _, event := range e.events {
+		_, is_found := state[event.Stream]
+		if !is_found {
+			// This is probably stdin then or a deleted file we can't resume
+			continue
+		}
 
-    state[event.Stream].Offset = event.Offset
-  }
+		state[event.Stream].Offset = event.Offset
+	}
 }
diff --git a/src/lc-lib/registrar/event_deleted.go b/src/lc-lib/registrar/event_deleted.go
index 28f9af52..473556d8 100644
--- a/src/lc-lib/registrar/event_deleted.go
+++ b/src/lc-lib/registrar/event_deleted.go
@@ -17,27 +17,27 @@ package registrar
 
 import (
-  "github.com/driskell/log-courier/src/lc-lib/core"
+	"github.com/driskell/log-courier/src/lc-lib/core"
 )
 
 type DeletedEvent struct {
-  stream core.Stream
+	stream core.Stream
 }
 
 func NewDeletedEvent(stream core.Stream) *DeletedEvent {
-  return &DeletedEvent{
-    stream: stream,
-  }
+	return &DeletedEvent{
+		stream: stream,
+	}
 }
 
 func (e *DeletedEvent) Process(state map[core.Stream]*FileState) {
-  if _, ok := state[e.stream]; ok {
-    log.Debug("Registrar received a deletion event for %s", *state[e.stream].Source)
-  } else {
-    log.Warning("Registrar received a deletion event for UNKNOWN (%p)", e.stream)
-  }
+	if _, ok := state[e.stream]; ok {
+		log.Debug("Registrar received a deletion event for %s", *state[e.stream].Source)
+	} else {
+		log.Warning("Registrar received a deletion event for UNKNOWN (%p)", e.stream)
+	}
 
-  // Purge the registrar entry - means the file is deleted so we can't resume
-  // This keeps the state clean so it doesn't build up after thousands of log files
-  delete(state, e.stream)
+	// Purge the registrar entry - means the file is deleted so we can't resume
+	// This keeps the state clean so it doesn't build up after thousands of log files
+	delete(state, e.stream)
 }
diff --git a/src/lc-lib/registrar/event_discover.go b/src/lc-lib/registrar/event_discover.go
index 213bf262..2bac27b2 100644
--- a/src/lc-lib/registrar/event_discover.go
+++ b/src/lc-lib/registrar/event_discover.go
@@ -17,33 +17,33 @@ package registrar
 
 import (
-  "github.com/driskell/log-courier/src/lc-lib/core"
-  "os"
+	"github.com/driskell/log-courier/src/lc-lib/core"
+	"os"
 )
 
 type DiscoverEvent struct {
-  stream core.Stream
-  source string
-  offset int64
-  fileinfo os.FileInfo
+	stream   core.Stream
+	source   string
+	offset   int64
+	fileinfo os.FileInfo
 }
 
 func NewDiscoverEvent(stream core.Stream, source string, offset int64, fileinfo os.FileInfo) *DiscoverEvent {
-  return &DiscoverEvent{
-    stream: stream,
-    source: source,
-    offset: offset,
-    fileinfo: fileinfo,
-  }
+	return &DiscoverEvent{
+		stream:   stream,
+		source:   source,
+		offset:   offset,
+		fileinfo: fileinfo,
+	}
 }
 
 func (e *DiscoverEvent) Process(state map[core.Stream]*FileState) {
-  log.Debug("Registrar received a new file event for %s", e.source)
+	log.Debug("Registrar received a new file event for %s", e.source)
 
-  // A new file we need to save offset information for so we can resume
-  state[e.stream] = &FileState{
-    Source: &e.source,
-    Offset: e.offset,
-  }
-  state[e.stream].PopulateFileIds(e.fileinfo)
+	// A new file we need to save offset information for so we can resume
+	state[e.stream] = &FileState{
+		Source: &e.source,
+		Offset: e.offset,
+	}
+	state[e.stream].PopulateFileIds(e.fileinfo)
 }
diff --git a/src/lc-lib/registrar/event_renamed.go b/src/lc-lib/registrar/event_renamed.go
index eb408dc2..6c55a7e7 100644
--- a/src/lc-lib/registrar/event_renamed.go
+++ b/src/lc-lib/registrar/event_renamed.go
@@ -17,30 +17,30 @@ package registrar
 
 import (
-  "github.com/driskell/log-courier/src/lc-lib/core"
+	"github.com/driskell/log-courier/src/lc-lib/core"
 )
 
 type RenamedEvent struct {
-  stream core.Stream
-  source string
+	stream core.Stream
+	source string
 }
 
 func NewRenamedEvent(stream core.Stream, source string) *RenamedEvent {
-  return &RenamedEvent{
-    stream: stream,
-    source: source,
-  }
+	return &RenamedEvent{
+		stream: stream,
+		source: source,
+	}
 }
 
 func (e *RenamedEvent) Process(state map[core.Stream]*FileState) {
-  _, is_found := state[e.stream]
-  if !is_found {
-    // This is probably stdin or a deleted file we can't resume
-    return
-  }
+	_, is_found := state[e.stream]
+	if !is_found {
+		// This is probably stdin or a deleted file we can't resume
+		return
+	}
 
-  log.Debug("Registrar received a rename event for %s -> %s", state[e.stream].Source, e.source)
+	log.Debug("Registrar received a rename event for %s -> %s", state[e.stream].Source, e.source)
 
-  // Update the stored file name
-  state[e.stream].Source = &e.source
+	// Update the stored file name
+	state[e.stream].Source = &e.source
 }
diff --git a/src/lc-lib/registrar/eventspool.go b/src/lc-lib/registrar/eventspool.go
index 0be0f594..5cfdcabc 100644
--- a/src/lc-lib/registrar/eventspool.go
+++ b/src/lc-lib/registrar/eventspool.go
@@ -17,41 +17,41 @@ package registrar
 
 import (
-  "github.com/driskell/log-courier/src/lc-lib/core"
+	"github.com/driskell/log-courier/src/lc-lib/core"
 )
 
 type RegistrarEvent interface {
-  Process(state map[core.Stream]*FileState)
+	Process(state map[core.Stream]*FileState)
 }
 
 type RegistrarEventSpool struct {
-  registrar *Registrar
-  events []RegistrarEvent
+	registrar *Registrar
+	events    []RegistrarEvent
 }
 
 func newRegistrarEventSpool(r *Registrar) *RegistrarEventSpool {
-  ret := &RegistrarEventSpool{
-    registrar: r,
-  }
-  ret.reset()
-  return ret
+	ret := &RegistrarEventSpool{
+		registrar: r,
+	}
+	ret.reset()
+	return ret
 }
 
 func (r *RegistrarEventSpool) Close() {
-  r.registrar.dereferenceSpooler()
+	r.registrar.dereferenceSpooler()
 }
 
 func (r *RegistrarEventSpool) Add(event RegistrarEvent) {
-  r.events = append(r.events, event)
+	r.events = append(r.events, event)
 }
 
 func (r *RegistrarEventSpool) Send() {
-  if len(r.events) != 0 {
-    r.registrar.registrar_chan <- r.events
-    r.reset()
-  }
+	if len(r.events) != 0 {
+		r.registrar.registrar_chan <- r.events
+		r.reset()
+	}
 }
 
 func (r *RegistrarEventSpool) reset() {
-  r.events = make([]RegistrarEvent, 0, 0)
+	r.events = make([]RegistrarEvent, 0, 0)
 }
diff --git a/src/lc-lib/registrar/filestate.go b/src/lc-lib/registrar/filestate.go
index b6e6b40c..2432d675 100644
--- a/src/lc-lib/registrar/filestate.go
+++ b/src/lc-lib/registrar/filestate.go
@@ -20,48 +20,48 @@ package registrar
 
 import (
-  "os"
+	"os"
 )
 
 type FileState struct {
-  FileStateOS
-  Source *string `json:"source,omitempty"`
-  Offset int64 `json:"offset,omitempty"`
+	FileStateOS
+	Source *string `json:"source,omitempty"`
+	Offset int64   `json:"offset,omitempty"`
 }
 
 type FileInfo struct {
-  fileinfo os.FileInfo
+	fileinfo os.FileInfo
 }
 
 func NewFileInfo(fileinfo os.FileInfo) *FileInfo {
-  return &FileInfo{
-    fileinfo: fileinfo,
-  }
+	return &FileInfo{
+		fileinfo: fileinfo,
+	}
 }
 
 func (fs *FileInfo) SameAs(info os.FileInfo) bool {
-  return os.SameFile(info, fs.fileinfo)
+	return os.SameFile(info, fs.fileinfo)
 }
 
 func (fs *FileInfo) Stat() os.FileInfo {
-  return fs.fileinfo
+	return fs.fileinfo
 }
 
 func (fs *FileInfo) Update(fileinfo os.FileInfo, identity *FileIdentity) {
-  fs.fileinfo = fileinfo
+	fs.fileinfo = fileinfo
 }
 
 func (fs *FileState) Stat() os.FileInfo {
-  return nil
+	return nil
 }
 
 func (fs *FileState) Update(fileinfo os.FileInfo, identity *FileIdentity) {
-  // Promote to a FileInfo
-  (*identity) = NewFileInfo(fileinfo)
+	// Promote to a FileInfo
+	(*identity) = NewFileInfo(fileinfo)
 }
 
 type FileIdentity interface {
-  SameAs(os.FileInfo) bool
-  Stat() os.FileInfo
-  Update(os.FileInfo, *FileIdentity)
+	SameAs(os.FileInfo) bool
+	Stat() os.FileInfo
+	Update(os.FileInfo, *FileIdentity)
 }
diff --git a/src/lc-lib/registrar/filestateos_darwin.go b/src/lc-lib/registrar/filestateos_darwin.go
index f35c9bd8..7f316747 100644
--- a/src/lc-lib/registrar/filestateos_darwin.go
+++ b/src/lc-lib/registrar/filestateos_darwin.go
@@ -20,23 +20,23 @@ package registrar
 
 import (
-  "os"
-  "syscall"
+	"os"
+	"syscall"
 )
 
 type FileStateOS struct {
-  Inode uint64 `json:"inode,omitempty"`
-  Device int32 `json:"device,omitempty"`
+	Inode  uint64 `json:"inode,omitempty"`
+	Device int32  `json:"device,omitempty"`
 }
 
 func (fs *FileStateOS) PopulateFileIds(info os.FileInfo) {
-  fstat := info.Sys().(*syscall.Stat_t)
-  fs.Inode = fstat.Ino
-  fs.Device = fstat.Dev
+	fstat := info.Sys().(*syscall.Stat_t)
+	fs.Inode = fstat.Ino
+	fs.Device = fstat.Dev
 }
 
 func (fs *FileStateOS) SameAs(info os.FileInfo) bool {
-  state := &FileStateOS{}
-  state.PopulateFileIds(info)
-  return (fs.Inode == state.Inode && fs.Device == state.Device)
+	state := &FileStateOS{}
+	state.PopulateFileIds(info)
+	return (fs.Inode == state.Inode && fs.Device == state.Device)
 }
diff --git a/src/lc-lib/registrar/filestateos_freebsd.go b/src/lc-lib/registrar/filestateos_freebsd.go
index 8ff3f614..666fe29a 100644
--- a/src/lc-lib/registrar/filestateos_freebsd.go
+++ b/src/lc-lib/registrar/filestateos_freebsd.go
@@ -20,23 +20,23 @@ package registrar
 
 import (
-  "os"
-  "syscall"
+	"os"
+	"syscall"
 )
 
 type FileStateOS struct {
-  Inode uint32 `json:"inode,omitempty"`
-  Device uint32 `json:"device,omitempty"`
+	Inode  uint32 `json:"inode,omitempty"`
+	Device uint32 `json:"device,omitempty"`
 }
 
 func (fs *FileStateOS) PopulateFileIds(info os.FileInfo) {
-  fstat := info.Sys().(*syscall.Stat_t)
-  fs.Inode = fstat.Ino
-  fs.Device = fstat.Dev
+	fstat := info.Sys().(*syscall.Stat_t)
+	fs.Inode = fstat.Ino
+	fs.Device = fstat.Dev
 }
 
 func (fs *FileStateOS) SameAs(info os.FileInfo) bool {
-  state := &FileStateOS{}
-  state.PopulateFileIds(info)
-  return (fs.Inode == state.Inode && fs.Device == state.Device)
+	state := &FileStateOS{}
+	state.PopulateFileIds(info)
+	return (fs.Inode == state.Inode && fs.Device == state.Device)
 }
diff --git a/src/lc-lib/registrar/filestateos_linux.go b/src/lc-lib/registrar/filestateos_linux.go
index 9f030c29..26bc36a2 100644
--- a/src/lc-lib/registrar/filestateos_linux.go
+++ b/src/lc-lib/registrar/filestateos_linux.go
@@ -20,23 +20,23 @@ package registrar
 
 import (
-  "os"
-  "syscall"
+	"os"
+	"syscall"
 )
 
 type FileStateOS struct {
-  Inode uint64 `json:"inode,omitempty"`
-  Device uint64 `json:"device,omitempty"`
+	Inode  uint64 `json:"inode,omitempty"`
+	Device uint64 `json:"device,omitempty"`
 }
 
 func (fs *FileStateOS) PopulateFileIds(info os.FileInfo) {
-  fstat := info.Sys().(*syscall.Stat_t)
-  fs.Inode = fstat.Ino
-  fs.Device = fstat.Dev
+	fstat := info.Sys().(*syscall.Stat_t)
+	fs.Inode = fstat.Ino
+	fs.Device = fstat.Dev
 }
 
 func (fs *FileStateOS) SameAs(info os.FileInfo) bool {
-  state := &FileStateOS{}
-  state.PopulateFileIds(info)
-  return (fs.Inode == state.Inode && fs.Device == state.Device)
+	state := &FileStateOS{}
+	state.PopulateFileIds(info)
+	return (fs.Inode == state.Inode && fs.Device == state.Device)
 }
diff --git a/src/lc-lib/registrar/filestateos_openbsd.go b/src/lc-lib/registrar/filestateos_openbsd.go
index f35c9bd8..7f316747 100644
--- a/src/lc-lib/registrar/filestateos_openbsd.go
+++ b/src/lc-lib/registrar/filestateos_openbsd.go
@@ -20,23 +20,23 @@ package registrar
 
 import (
-  "os"
-  "syscall"
+	"os"
+	"syscall"
 )
 
 type FileStateOS struct {
-  Inode uint64 `json:"inode,omitempty"`
-  Device int32 `json:"device,omitempty"`
+	Inode  uint64 `json:"inode,omitempty"`
+	Device int32  `json:"device,omitempty"`
 }
 
 func (fs *FileStateOS) PopulateFileIds(info os.FileInfo) {
-  fstat := info.Sys().(*syscall.Stat_t)
-  fs.Inode = fstat.Ino
-  fs.Device = fstat.Dev
+	fstat := info.Sys().(*syscall.Stat_t)
+	fs.Inode = fstat.Ino
+	fs.Device = fstat.Dev
 }
 
 func (fs *FileStateOS) SameAs(info os.FileInfo) bool {
-  state := &FileStateOS{}
-  state.PopulateFileIds(info)
-  return (fs.Inode == state.Inode && fs.Device == state.Device)
+	state := &FileStateOS{}
+	state.PopulateFileIds(info)
+	return (fs.Inode == state.Inode && fs.Device == state.Device)
 }
diff --git a/src/lc-lib/registrar/filestateos_windows.go b/src/lc-lib/registrar/filestateos_windows.go
index 5c15ebcb..9be98d73 100644
--- a/src/lc-lib/registrar/filestateos_windows.go
+++ b/src/lc-lib/registrar/filestateos_windows.go
@@ -20,55 +20,55 @@ package registrar
 
 import (
-  "os"
-  "reflect"
+	"os"
+	"reflect"
 )
 
 type FileStateOS struct {
-  Vol uint32 `json:"vol,omitempty"`
-  IdxHi uint32 `json:"idxhi,omitempty"`
-  IdxLo uint32 `json:"idxlo,omitempty"`
+	Vol   uint32 `json:"vol,omitempty"`
+	IdxHi uint32 `json:"idxhi,omitempty"`
+	IdxLo uint32 `json:"idxlo,omitempty"`
 }
 
 func (fs *FileStateOS) PopulateFileIds(info os.FileInfo) {
-  // For information on the following, see Go source: src/pkg/os/types_windows.go
-  // This is the only way we can get at the idxhi and idxlo
-  // Unix it is much easier as syscall.Stat_t is exposed and os.FileInfo interface has a Sys() method to get a syscall.Stat_t
-  // Unfortunately, the relevant Windows information is in a private struct so we have to dig inside
+	// For information on the following, see Go source: src/pkg/os/types_windows.go
+	// This is the only way we can get at the idxhi and idxlo
+	// On Unix it is much easier, as syscall.Stat_t is exposed and the os.FileInfo interface has a Sys() method to get a syscall.Stat_t
+	// Unfortunately, the relevant Windows information is in a private struct so we have to dig inside
 
-  // NOTE: This WILL be prone to break if Go source changes, but I'd rather just fix it if it does or make it fail gracefully
+	// NOTE: This WILL be prone to break if Go source changes, but I'd rather just fix it if it does or make it fail gracefully
 
-  // info is os.FileInfo which is an interface to a
-  // - *os.fileStat (holding methods) which is a pointer to a
-  // - os.fileStat (holding data)
-  // ValueOf will pick up the interface contents immediately, so we need a single Elem()
+	// info is os.FileInfo which is an interface to a
+	// - *os.fileStat (holding methods) which is a pointer to a
+	// - os.fileStat (holding data)
+	// ValueOf will pick up the interface contents immediately, so we need a single Elem()
 
-  // Ensure that the numbers are loaded by calling os.SameFile
-  // os.SameFile will call sameFile (types_windows.go) which will call *os.fileStat's loadFileId
-  // Reflection panics if we try to call loadFileId directly as its a hidden method; regardless this is much safer and more reliable
-  os.SameFile(info, info)
+	// Ensure that the numbers are loaded by calling os.SameFile
+	// os.SameFile will call sameFile (types_windows.go) which will call *os.fileStat's loadFileId
+	// Reflection panics if we try to call loadFileId directly as it's a hidden method; regardless, this is much safer and more reliable
+	os.SameFile(info, info)
 
-  // If any of the following fails, report the library has changed and recover and return 0s
-  defer func() {
-    if r := recover(); r != nil {
-      log.Error("BUG: File rotations that occur while Log Courier is not running will NOT be detected due to an incompatible change to the Go library used for compiling.")
-      fs.Vol = 0
-      fs.IdxHi = 0
-      fs.IdxLo = 0
-    }
-  }()
+	// If any of the following fails, report the library has changed and recover and return 0s
+	defer func() {
+		if r := recover(); r != nil {
+			log.Error("BUG: File rotations that occur while Log Courier is not running will NOT be detected due to an incompatible change to the Go library used for compiling.")
+			fs.Vol = 0
+			fs.IdxHi = 0
+			fs.IdxLo = 0
+		}
+	}()
 
-  // Following makes fstat hold os.fileStat
-  fstat := reflect.ValueOf(info).Elem()
+	// Following makes fstat hold os.fileStat
+	fstat := reflect.ValueOf(info).Elem()
 
-  // To get the data, we need the os.fileStat that fstat points to, so one more Elem()
-  fs.Vol = uint32(fstat.FieldByName("vol").Uint())
-  fs.IdxHi = uint32(fstat.FieldByName("idxhi").Uint())
-  fs.IdxLo = uint32(fstat.FieldByName("idxlo").Uint())
+	// To get the data, we need the os.fileStat that fstat points to, so one more Elem()
+	fs.Vol = uint32(fstat.FieldByName("vol").Uint())
+	fs.IdxHi = uint32(fstat.FieldByName("idxhi").Uint())
+	fs.IdxLo = uint32(fstat.FieldByName("idxlo").Uint())
 }
 
 func (fs *FileStateOS) SameAs(info os.FileInfo) bool {
-  state := &FileStateOS{}
-  state.PopulateFileIds(info)
-  return (fs.Vol == state.Vol && fs.IdxHi == state.IdxHi && fs.IdxLo == state.IdxLo)
+	state := &FileStateOS{}
+	state.PopulateFileIds(info)
+	return (fs.Vol == state.Vol && fs.IdxHi == state.IdxHi && fs.IdxLo == state.IdxLo)
 }
diff --git a/src/lc-lib/registrar/logging.go b/src/lc-lib/registrar/logging.go
index 5bfa8f79..eabf045d 100644
--- a/src/lc-lib/registrar/logging.go
+++ b/src/lc-lib/registrar/logging.go
@@ -21,5 +21,5 @@ import "github.com/op/go-logging"
 var log *logging.Logger
 
 func init() {
-  log = logging.MustGetLogger("registrar")
+	log = logging.MustGetLogger("registrar")
 }
diff
--git a/src/lc-lib/registrar/registrar.go b/src/lc-lib/registrar/registrar.go index b190e261..3e51481c 100644 --- a/src/lc-lib/registrar/registrar.go +++ b/src/lc-lib/registrar/registrar.go @@ -20,148 +20,148 @@ package registrar import ( - "encoding/json" - "fmt" - "github.com/driskell/log-courier/src/lc-lib/core" - "os" - "sync" + "encoding/json" + "fmt" + "github.com/driskell/log-courier/src/lc-lib/core" + "os" + "sync" ) type LoadPreviousFunc func(string, *FileState) (core.Stream, error) type Registrar struct { - core.PipelineSegment + core.PipelineSegment - sync.Mutex + sync.Mutex - registrar_chan chan []RegistrarEvent - references int - persistdir string - statefile string - state map[core.Stream]*FileState + registrar_chan chan []RegistrarEvent + references int + persistdir string + statefile string + state map[core.Stream]*FileState } func NewRegistrar(pipeline *core.Pipeline, persistdir string) *Registrar { - ret := &Registrar{ - registrar_chan: make(chan []RegistrarEvent, 16), // TODO: Make configurable? - persistdir: persistdir, - statefile: ".log-courier", - state: make(map[core.Stream]*FileState), - } + ret := &Registrar{ + registrar_chan: make(chan []RegistrarEvent, 16), // TODO: Make configurable? 
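The `PopulateFileIds` change above reads the private `os.fileStat` fields (`vol`, `idxhi`, `idxlo`) via reflection, with a `defer`/`recover` guard so an incompatible Go library change degrades to zero values instead of crashing. A minimal sketch of that guard pattern follows; the `inner` struct and `readPrivateField` helper are illustrative only, not the real `os` internals:

```go
package main

import (
	"fmt"
	"reflect"
)

type inner struct {
	vol uint64
}

// readPrivateField mirrors the pattern in PopulateFileIds: reflect into a
// struct field by name, and recover to a safe zero value if the field has
// been renamed or removed in a newer library version.
func readPrivateField(v interface{}, name string) (val uint64) {
	defer func() {
		if r := recover(); r != nil {
			// Library layout changed - fall back to 0 rather than crash
			val = 0
		}
	}()
	// FieldByName on a missing field yields the zero Value, whose Uint()
	// panics - which the deferred recover above turns into a 0 result.
	return reflect.ValueOf(v).Elem().FieldByName(name).Uint()
}

func main() {
	s := &inner{vol: 42}
	fmt.Println(readPrivateField(s, "vol"))     // field exists
	fmt.Println(readPrivateField(s, "missing")) // recovered to zero value
}
```

Note that reading an unexported numeric field through `Value.Uint()` works, while *calling* a hidden method such as `loadFileId` would panic unconditionally - which is why the patch triggers the load indirectly via `os.SameFile(info, info)`.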
+ persistdir: persistdir, + statefile: ".log-courier", + state: make(map[core.Stream]*FileState), + } - pipeline.Register(ret) + pipeline.Register(ret) - return ret + return ret } func (r *Registrar) LoadPrevious(callback_func LoadPreviousFunc) (have_previous bool, err error) { - data := make(map[string]*FileState) - - // Load the previous state - opening RDWR ensures we can write too and fail early - // c_filename is what we will use to test create capability - filename := r.persistdir + string(os.PathSeparator) + ".log-courier" - c_filename := r.persistdir + string(os.PathSeparator) + ".log-courier.new" - - var f *os.File - f, err = os.OpenFile(filename, os.O_RDWR, 0600) - if err != nil { - // Fail immediately if this is not a path not found error - if !os.IsNotExist(err) { - return - } - - // Try the .new file - maybe we failed mid-move - filename, c_filename = c_filename, filename - f, err = os.OpenFile(filename, os.O_RDWR, 0600) - } - - if err != nil { - // Did we fail, or did it just not exist? 
- if !os.IsNotExist(err) { - return - } - return false, nil - } - - // Parse the data - log.Notice("Loading registrar data from %s", filename) - have_previous = true - - decoder := json.NewDecoder(f) - decoder.Decode(&data) - f.Close() - - r.state = make(map[core.Stream]*FileState, len(data)) - - var stream core.Stream - for file, state := range data { - if stream, err = callback_func(file, state); err != nil { - return - } - r.state[stream] = state - } - - // Test we can successfully save new states by attempting to save now - if err = r.writeRegistry(); err != nil { - return false, fmt.Errorf("Registry write failed: %s", err) - } - - return + data := make(map[string]*FileState) + + // Load the previous state - opening RDWR ensures we can write too and fail early + // c_filename is what we will use to test create capability + filename := r.persistdir + string(os.PathSeparator) + ".log-courier" + c_filename := r.persistdir + string(os.PathSeparator) + ".log-courier.new" + + var f *os.File + f, err = os.OpenFile(filename, os.O_RDWR, 0600) + if err != nil { + // Fail immediately if this is not a path not found error + if !os.IsNotExist(err) { + return + } + + // Try the .new file - maybe we failed mid-move + filename, c_filename = c_filename, filename + f, err = os.OpenFile(filename, os.O_RDWR, 0600) + } + + if err != nil { + // Did we fail, or did it just not exist? 
+ if !os.IsNotExist(err) { + return + } + return false, nil + } + + // Parse the data + log.Notice("Loading registrar data from %s", filename) + have_previous = true + + decoder := json.NewDecoder(f) + decoder.Decode(&data) + f.Close() + + r.state = make(map[core.Stream]*FileState, len(data)) + + var stream core.Stream + for file, state := range data { + if stream, err = callback_func(file, state); err != nil { + return + } + r.state[stream] = state + } + + // Test we can successfully save new states by attempting to save now + if err = r.writeRegistry(); err != nil { + return false, fmt.Errorf("Registry write failed: %s", err) + } + + return } func (r *Registrar) Connect() *RegistrarEventSpool { - r.Lock() - ret := newRegistrarEventSpool(r) - r.references++ - r.Unlock() - return ret + r.Lock() + ret := newRegistrarEventSpool(r) + r.references++ + r.Unlock() + return ret } func (r *Registrar) dereferenceSpooler() { - r.Lock() - r.references-- - if r.references == 0 { - // Shutdown registrar, all references are closed - close(r.registrar_chan) - } - r.Unlock() + r.Lock() + r.references-- + if r.references == 0 { + // Shutdown registrar, all references are closed + close(r.registrar_chan) + } + r.Unlock() } func (r *Registrar) toCanonical() (canonical map[string]*FileState) { - canonical = make(map[string]*FileState, len(r.state)) - for _, value := range r.state { - if _, ok := canonical[*value.Source]; ok { - // We should never allow this - report an error - log.Error("BUG: Unexpected registrar conflict detected for %s", *value.Source) - } - canonical[*value.Source] = value - } - return + canonical = make(map[string]*FileState, len(r.state)) + for _, value := range r.state { + if _, ok := canonical[*value.Source]; ok { + // We should never allow this - report an error + log.Error("BUG: Unexpected registrar conflict detected for %s", *value.Source) + } + canonical[*value.Source] = value + } + return } func (r *Registrar) Run() { - defer func() { - r.Done() - }() + 
defer func() { + r.Done() + }() RegistrarLoop: - for { - // Ignore shutdown channel - wait for registrar to close - select { - case spool := <-r.registrar_chan: - if spool == nil { - break RegistrarLoop - } - - for _, event := range spool { - event.Process(r.state) - } - - if err := r.writeRegistry(); err != nil { - log.Error("Registry write failed: %s", err) - } - } - } - - log.Info("Registrar exiting") + for { + // Ignore shutdown channel - wait for registrar to close + select { + case spool := <-r.registrar_chan: + if spool == nil { + break RegistrarLoop + } + + for _, event := range spool { + event.Process(r.state) + } + + if err := r.writeRegistry(); err != nil { + log.Error("Registry write failed: %s", err) + } + } + } + + log.Info("Registrar exiting") } diff --git a/src/lc-lib/registrar/registrar_other.go b/src/lc-lib/registrar/registrar_other.go index 9bebe108..7cbaacc5 100644 --- a/src/lc-lib/registrar/registrar_other.go +++ b/src/lc-lib/registrar/registrar_other.go @@ -22,22 +22,22 @@ package registrar import ( - "encoding/json" - "os" + "encoding/json" + "os" ) func (r *Registrar) writeRegistry() error { - // Open tmp file, write, flush, rename - fname := r.persistdir + string(os.PathSeparator) + r.statefile - tname := fname + ".new" - file, err := os.Create(tname) - if err != nil { - return err - } - defer file.Close() + // Open tmp file, write, flush, rename + fname := r.persistdir + string(os.PathSeparator) + r.statefile + tname := fname + ".new" + file, err := os.Create(tname) + if err != nil { + return err + } + defer file.Close() - encoder := json.NewEncoder(file) - encoder.Encode(r.toCanonical()) + encoder := json.NewEncoder(file) + encoder.Encode(r.toCanonical()) - return os.Rename(tname, fname) + return os.Rename(tname, fname) } diff --git a/src/lc-lib/registrar/registrar_windows.go b/src/lc-lib/registrar/registrar_windows.go index fdcce814..da9ace19 100644 --- a/src/lc-lib/registrar/registrar_windows.go +++ 
b/src/lc-lib/registrar/registrar_windows.go @@ -20,32 +20,32 @@ package registrar import ( - "encoding/json" - "fmt" - "os" + "encoding/json" + "fmt" + "os" ) func (r *Registrar) writeRegistry() error { - fname := r.persistdir + string(os.PathSeparator) + r.statefile - tname := fname + ".new" - file, err := os.Create(tname) - if err != nil { - return err - } + fname := r.persistdir + string(os.PathSeparator) + r.statefile + tname := fname + ".new" + file, err := os.Create(tname) + if err != nil { + return err + } - encoder := json.NewEncoder(file) - encoder.Encode(r.toCanonical()) - file.Close() + encoder := json.NewEncoder(file) + encoder.Encode(r.toCanonical()) + file.Close() - var d_err error - if _, err = os.Stat(fname); err == nil || !os.IsNotExist(err) { - d_err = os.Remove(fname) - } + var d_err error + if _, err = os.Stat(fname); err == nil || !os.IsNotExist(err) { + d_err = os.Remove(fname) + } - err = os.Rename(tname, fname) - if err != nil { - return fmt.Errorf("%s -> %s", d_err, err) - } + err = os.Rename(tname, fname) + if err != nil { + return fmt.Errorf("%s -> %s", d_err, err) + } - return nil + return nil } diff --git a/src/lc-lib/spooler/logging.go b/src/lc-lib/spooler/logging.go index 44667972..a86c3077 100644 --- a/src/lc-lib/spooler/logging.go +++ b/src/lc-lib/spooler/logging.go @@ -21,5 +21,5 @@ import "github.com/op/go-logging" var log *logging.Logger func init() { - log = logging.MustGetLogger("spooler") + log = logging.MustGetLogger("spooler") } diff --git a/src/lc-lib/spooler/spooler.go b/src/lc-lib/spooler/spooler.go index a34a7052..02fe671b 100644 --- a/src/lc-lib/spooler/spooler.go +++ b/src/lc-lib/spooler/spooler.go @@ -20,152 +20,152 @@ package spooler import ( - "github.com/driskell/log-courier/src/lc-lib/core" - "github.com/driskell/log-courier/src/lc-lib/publisher" - "time" + "github.com/driskell/log-courier/src/lc-lib/core" + "github.com/driskell/log-courier/src/lc-lib/publisher" + "time" ) const ( - // Event header is just uint32 
at the moment - event_header_size = 4 + // Event header is just uint32 at the moment + event_header_size = 4 ) type Spooler struct { - core.PipelineSegment - core.PipelineConfigReceiver - - config *core.GeneralConfig - spool []*core.EventDescriptor - spool_size int - input chan *core.EventDescriptor - output chan<- []*core.EventDescriptor - timer_start time.Time - timer *time.Timer + core.PipelineSegment + core.PipelineConfigReceiver + + config *core.GeneralConfig + spool []*core.EventDescriptor + spool_size int + input chan *core.EventDescriptor + output chan<- []*core.EventDescriptor + timer_start time.Time + timer *time.Timer } func NewSpooler(pipeline *core.Pipeline, config *core.GeneralConfig, publisher_imp *publisher.Publisher) *Spooler { - ret := &Spooler{ - config: config, - spool: make([]*core.EventDescriptor, 0, config.SpoolSize), - input: make(chan *core.EventDescriptor, 16), // TODO: Make configurable? - output: publisher_imp.Connect(), - } + ret := &Spooler{ + config: config, + spool: make([]*core.EventDescriptor, 0, config.SpoolSize), + input: make(chan *core.EventDescriptor, 16), // TODO: Make configurable? 
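The spooler batches events and flushes on three conditions: the next event would push the spool past `spool max bytes` (each event costing its length plus the 4-byte header), the spool reaches its configured event count, or the spool timeout fires. The size/byte decision logic can be sketched in isolation; the `spool` type and `add` helper here are illustrative, not the actual Spooler API, and the timeout path is omitted:

```go
package main

import "fmt"

const eventHeaderSize = 4 // uint32 header per event, as in the spooler

type spool struct {
	events   [][]byte
	size     int64
	maxCount int
	maxBytes int64
}

// add appends an event, first flushing if the event would push the spool
// over maxBytes, then flushing again if the spool has reached maxCount.
// Any flushed batches are returned to the caller (the publisher's role).
func (s *spool) add(event []byte) (flushed [][]byte) {
	need := int64(len(event)) + eventHeaderSize
	if len(s.events) > 0 && s.size+need >= s.maxBytes {
		// Can't fit this event - flush what we have, then queue it
		flushed = s.flush()
	}
	s.events = append(s.events, event)
	s.size += need
	if len(s.events) >= s.maxCount {
		// Spool full by count - flush
		flushed = append(flushed, s.flush()...)
	}
	return flushed
}

func (s *spool) flush() [][]byte {
	out := s.events
	s.events = nil
	s.size = 0
	return out
}

func main() {
	s := &spool{maxCount: 3, maxBytes: 64}
	s.add([]byte("one"))
	s.add([]byte("two"))
	batch := s.add([]byte("three")) // hits maxCount, flushes all three
	fmt.Println(len(batch))
}
```

The real `Run` loop also drains and resets its `time.Timer` between flushes (`Stop`, non-blocking receive on `timer.C`, then `Reset`), which is the safe way to reuse a timer whose previous expiry may already be queued on its channel.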
+ output: publisher_imp.Connect(), + } - pipeline.Register(ret) + pipeline.Register(ret) - return ret + return ret } func (s *Spooler) Connect() chan<- *core.EventDescriptor { - return s.input + return s.input } func (s *Spooler) Run() { - defer func() { - s.Done() - }() + defer func() { + s.Done() + }() - s.timer_start = time.Now() - s.timer = time.NewTimer(s.config.SpoolTimeout) + s.timer_start = time.Now() + s.timer = time.NewTimer(s.config.SpoolTimeout) SpoolerLoop: - for { - select { - case event := <-s.input: - if len(s.spool) > 0 && int64(s.spool_size) + int64(len(event.Event)) + event_header_size >= s.config.SpoolMaxBytes { - log.Debug("Spooler flushing %d events due to spool max bytes (%d/%d - next is %d)", len(s.spool), s.spool_size, s.config.SpoolMaxBytes, len(event.Event) + 4) - - // Can't fit this event in the spool - flush and then queue - if !s.sendSpool() { - break SpoolerLoop - } - - s.resetTimer() - s.spool_size += len(event.Event) + event_header_size - s.spool = append(s.spool, event) - - continue - } - - s.spool_size += len(event.Event) + event_header_size - s.spool = append(s.spool, event) - - // Flush if full - if len(s.spool) >= cap(s.spool) { - log.Debug("Spooler flushing %d events due to spool size reached", len(s.spool)) - - if !s.sendSpool() { - break SpoolerLoop - } - - s.resetTimer() - } - case <-s.timer.C: - // Flush what we have, if anything - if len(s.spool) > 0 { - log.Debug("Spooler flushing %d events due to spool timeout exceeded", len(s.spool)) - - if !s.sendSpool() { - break SpoolerLoop - } - } - - s.resetTimer() - case <-s.OnShutdown(): - break SpoolerLoop - case config := <-s.OnConfig(): - if !s.reloadConfig(config) { - break SpoolerLoop - } - } - } - - log.Info("Spooler exiting") + for { + select { + case event := <-s.input: + if len(s.spool) > 0 && int64(s.spool_size)+int64(len(event.Event))+event_header_size >= s.config.SpoolMaxBytes { + log.Debug("Spooler flushing %d events due to spool max bytes (%d/%d - next is %d)", 
len(s.spool), s.spool_size, s.config.SpoolMaxBytes, len(event.Event)+4) + + // Can't fit this event in the spool - flush and then queue + if !s.sendSpool() { + break SpoolerLoop + } + + s.resetTimer() + s.spool_size += len(event.Event) + event_header_size + s.spool = append(s.spool, event) + + continue + } + + s.spool_size += len(event.Event) + event_header_size + s.spool = append(s.spool, event) + + // Flush if full + if len(s.spool) >= cap(s.spool) { + log.Debug("Spooler flushing %d events due to spool size reached", len(s.spool)) + + if !s.sendSpool() { + break SpoolerLoop + } + + s.resetTimer() + } + case <-s.timer.C: + // Flush what we have, if anything + if len(s.spool) > 0 { + log.Debug("Spooler flushing %d events due to spool timeout exceeded", len(s.spool)) + + if !s.sendSpool() { + break SpoolerLoop + } + } + + s.resetTimer() + case <-s.OnShutdown(): + break SpoolerLoop + case config := <-s.OnConfig(): + if !s.reloadConfig(config) { + break SpoolerLoop + } + } + } + + log.Info("Spooler exiting") } func (s *Spooler) sendSpool() bool { - select { - case <-s.OnShutdown(): - return false - case config := <-s.OnConfig(): - if !s.reloadConfig(config) { - return false - } - case s.output <- s.spool: - } - - s.spool = make([]*core.EventDescriptor, 0, s.config.SpoolSize) - s.spool_size = 0 - - return true + select { + case <-s.OnShutdown(): + return false + case config := <-s.OnConfig(): + if !s.reloadConfig(config) { + return false + } + case s.output <- s.spool: + } + + s.spool = make([]*core.EventDescriptor, 0, s.config.SpoolSize) + s.spool_size = 0 + + return true } func (s *Spooler) resetTimer() { - s.timer_start = time.Now() - - // Stop the timer, and ensure the channel is empty before restarting it - s.timer.Stop() - select { - case <-s.timer.C: - default: - } - s.timer.Reset(s.config.SpoolTimeout) + s.timer_start = time.Now() + + // Stop the timer, and ensure the channel is empty before restarting it + s.timer.Stop() + select { + case <-s.timer.C: + 
default: + } + s.timer.Reset(s.config.SpoolTimeout) } func (s *Spooler) reloadConfig(config *core.Config) bool { - s.config = &config.General - - // Immediate flush? - passed := time.Now().Sub(s.timer_start) - if passed >= s.config.SpoolTimeout || len(s.spool) >= int(s.config.SpoolSize) { - if !s.sendSpool() { - return false - } - s.timer_start = time.Now() - s.timer.Reset(s.config.SpoolTimeout) - } else { - s.timer.Reset(passed - s.config.SpoolTimeout) - } - - return true + s.config = &config.General + + // Immediate flush? + passed := time.Now().Sub(s.timer_start) + if passed >= s.config.SpoolTimeout || len(s.spool) >= int(s.config.SpoolSize) { + if !s.sendSpool() { + return false + } + s.timer_start = time.Now() + s.timer.Reset(s.config.SpoolTimeout) + } else { + s.timer.Reset(passed - s.config.SpoolTimeout) + } + + return true } diff --git a/src/lc-lib/transports/logging.go b/src/lc-lib/transports/logging.go index d2ba8d90..b6c13486 100644 --- a/src/lc-lib/transports/logging.go +++ b/src/lc-lib/transports/logging.go @@ -21,5 +21,5 @@ import "github.com/op/go-logging" var log *logging.Logger func init() { - log = logging.MustGetLogger("transports") + log = logging.MustGetLogger("transports") } diff --git a/src/lc-lib/transports/tcp.go b/src/lc-lib/transports/tcp.go index 0074fe50..11441890 100644 --- a/src/lc-lib/transports/tcp.go +++ b/src/lc-lib/transports/tcp.go @@ -20,19 +20,19 @@ package transports import ( - "bytes" - "crypto/tls" - "crypto/x509" - "encoding/binary" - "encoding/pem" - "fmt" - "io/ioutil" - "github.com/driskell/log-courier/src/lc-lib/core" - "math/rand" - "net" - "regexp" - "sync" - "time" + "bytes" + "crypto/tls" + "crypto/x509" + "encoding/binary" + "encoding/pem" + "fmt" + "github.com/driskell/log-courier/src/lc-lib/core" + "io/ioutil" + "math/rand" + "net" + "regexp" + "sync" + "time" ) // Support for newer SSL signature algorithms @@ -40,399 +40,399 @@ import _ "crypto/sha256" import _ "crypto/sha512" const ( - // Essentially, this is 
how often we should check for disconnect/shutdown during socket reads - socket_interval_seconds = 1 + // Essentially, this is how often we should check for disconnect/shutdown during socket reads + socket_interval_seconds = 1 ) type TransportTcpRegistrar struct { } type TransportTcpFactory struct { - transport string + transport string - SSLCertificate string `config:"ssl certificate"` - SSLKey string `config:"ssl key"` - SSLCA string `config:"ssl ca"` + SSLCertificate string `config:"ssl certificate"` + SSLKey string `config:"ssl key"` + SSLCA string `config:"ssl ca"` - hostport_re *regexp.Regexp - tls_config tls.Config + hostport_re *regexp.Regexp + tls_config tls.Config } type TransportTcp struct { - config *TransportTcpFactory - net_config *core.NetworkConfig - socket net.Conn - tlssocket *tls.Conn + config *TransportTcpFactory + net_config *core.NetworkConfig + socket net.Conn + tlssocket *tls.Conn - wait sync.WaitGroup - shutdown chan interface{} + wait sync.WaitGroup + shutdown chan interface{} - send_chan chan []byte - recv_chan chan interface{} + send_chan chan []byte + recv_chan chan interface{} - can_send chan int + can_send chan int - roundrobin int - host_is_ip bool - host string - port string - addresses []net.IP + roundrobin int + host_is_ip bool + host string + port string + addresses []net.IP } func NewTcpTransportFactory(config *core.Config, config_path string, unused map[string]interface{}, name string) (core.TransportFactory, error) { - var err error - - ret := &TransportTcpFactory{ - transport: name, - hostport_re: regexp.MustCompile(`^\[?([^]]+)\]?:([0-9]+)$`), - } - - // Only allow SSL configurations if this is "tls" - if name == "tls" { - if err = config.PopulateConfig(ret, config_path, unused); err != nil { - return nil, err - } - - if len(ret.SSLCertificate) > 0 && len(ret.SSLKey) > 0 { - cert, err := tls.LoadX509KeyPair(ret.SSLCertificate, ret.SSLKey) - if err != nil { - return nil, fmt.Errorf("Failed loading client ssl certificate: %s", 
err) - } - - ret.tls_config.Certificates = []tls.Certificate{cert} - } - - if len(ret.SSLCA) > 0 { - ret.tls_config.RootCAs = x509.NewCertPool() - pemdata, err := ioutil.ReadFile(ret.SSLCA) - if err != nil { - return nil, fmt.Errorf("Failure reading CA certificate: %s\n", err) - } - rest := pemdata - var block *pem.Block - var pemBlockNum = 1 - for { - block, rest = pem.Decode(rest) - if block != nil { - if block.Type != "CERTIFICATE" { - return nil, fmt.Errorf("Block %d does not contain a certificate: %s\n", pemBlockNum, ret.SSLCA) - } - cert, err := x509.ParseCertificate(block.Bytes) - if err != nil { - return nil, fmt.Errorf("Failed to parse CA certificate in block %d: %s\n", pemBlockNum, ret.SSLCA) - } - ret.tls_config.RootCAs.AddCert(cert) - pemBlockNum += 1 - } else { - break - } - } - } - } else { - if err := config.ReportUnusedConfig(config_path, unused); err != nil { - return nil, err - } - } - - return ret, nil + var err error + + ret := &TransportTcpFactory{ + transport: name, + hostport_re: regexp.MustCompile(`^\[?([^]]+)\]?:([0-9]+)$`), + } + + // Only allow SSL configurations if this is "tls" + if name == "tls" { + if err = config.PopulateConfig(ret, config_path, unused); err != nil { + return nil, err + } + + if len(ret.SSLCertificate) > 0 && len(ret.SSLKey) > 0 { + cert, err := tls.LoadX509KeyPair(ret.SSLCertificate, ret.SSLKey) + if err != nil { + return nil, fmt.Errorf("Failed loading client ssl certificate: %s", err) + } + + ret.tls_config.Certificates = []tls.Certificate{cert} + } + + if len(ret.SSLCA) > 0 { + ret.tls_config.RootCAs = x509.NewCertPool() + pemdata, err := ioutil.ReadFile(ret.SSLCA) + if err != nil { + return nil, fmt.Errorf("Failure reading CA certificate: %s\n", err) + } + rest := pemdata + var block *pem.Block + var pemBlockNum = 1 + for { + block, rest = pem.Decode(rest) + if block != nil { + if block.Type != "CERTIFICATE" { + return nil, fmt.Errorf("Block %d does not contain a certificate: %s\n", pemBlockNum, ret.SSLCA) + } + 
cert, err := x509.ParseCertificate(block.Bytes) + if err != nil { + return nil, fmt.Errorf("Failed to parse CA certificate in block %d: %s\n", pemBlockNum, ret.SSLCA) + } + ret.tls_config.RootCAs.AddCert(cert) + pemBlockNum += 1 + } else { + break + } + } + } + } else { + if err := config.ReportUnusedConfig(config_path, unused); err != nil { + return nil, err + } + } + + return ret, nil } func (f *TransportTcpFactory) NewTransport(config *core.NetworkConfig) (core.Transport, error) { - ret := &TransportTcp{ - config: f, - net_config: config, - } + ret := &TransportTcp{ + config: f, + net_config: config, + } - // Randomise the initial host - after this it will round robin - // Round robin after initial attempt ensures we don't retry same host twice, - // and also ensures we try all hosts one by one - ret.roundrobin = rand.Intn(len(config.Servers)) + // Randomise the initial host - after this it will round robin + // Round robin after initial attempt ensures we don't retry same host twice, + // and also ensures we try all hosts one by one + ret.roundrobin = rand.Intn(len(config.Servers)) - return ret, nil + return ret, nil } func (t *TransportTcp) ReloadConfig(new_net_config *core.NetworkConfig) int { - // Check we can grab new TCP config to compare, if not force transport reinit - new_config, ok := new_net_config.TransportFactory.(*TransportTcpFactory) - if !ok { - return core.Reload_Transport - } + // Check we can grab new TCP config to compare, if not force transport reinit + new_config, ok := new_net_config.TransportFactory.(*TransportTcpFactory) + if !ok { + return core.Reload_Transport + } - // TODO - This does not catch changes to the underlying certificate file! - if new_config.SSLCertificate != t.config.SSLCertificate || new_config.SSLKey != t.config.SSLKey || new_config.SSLCA != t.config.SSLCA { - return core.Reload_Transport - } + // TODO - This does not catch changes to the underlying certificate file! 
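The CA-loading loop above walks the file with `pem.Decode` so a bundle containing several certificates is fully loaded, and rejects any non-certificate block with a numbered error. The same loop in isolation (`loadCAPool` is an illustrative helper; the standard library's `CertPool.AppendCertsFromPEM` does similar work but silently skips non-certificate blocks, which is why the patch validates each block itself):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
)

// loadCAPool decodes every PEM block in data, requiring each to be a
// CERTIFICATE, and adds them all to a pool so that multi-certificate
// CA bundles are honoured.
func loadCAPool(data []byte) (*x509.CertPool, int, error) {
	pool := x509.NewCertPool()
	count := 0
	for {
		var block *pem.Block
		block, data = pem.Decode(data)
		if block == nil {
			break // no more PEM blocks
		}
		if block.Type != "CERTIFICATE" {
			return nil, count, fmt.Errorf("block %d is %q, not a certificate", count+1, block.Type)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return nil, count, fmt.Errorf("parsing certificate block %d: %s", count+1, err)
		}
		pool.AddCert(cert)
		count++
	}
	return pool, count, nil
}

func main() {
	// Non-certificate PEM content is rejected up front
	junk := []byte("-----BEGIN RSA PRIVATE KEY-----\nYWJj\n-----END RSA PRIVATE KEY-----\n")
	_, _, err := loadCAPool(junk)
	fmt.Println(err != nil)
}
```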
+ if new_config.SSLCertificate != t.config.SSLCertificate || new_config.SSLKey != t.config.SSLKey || new_config.SSLCA != t.config.SSLCA { + return core.Reload_Transport + } - // Publisher handles changes to net_config, but ensure we store the latest in case it asks for a reconnect - t.net_config = new_net_config + // Publisher handles changes to net_config, but ensure we store the latest in case it asks for a reconnect + t.net_config = new_net_config - return core.Reload_None + return core.Reload_None } func (t *TransportTcp) Init() error { - if t.shutdown != nil { - t.disconnect() - } - - // Have we exhausted the address list we had? - if t.addresses == nil { - var err error - - // Round robin to the next server - selected := t.net_config.Servers[t.roundrobin%len(t.net_config.Servers)] - t.roundrobin++ - - t.host, t.port, err = net.SplitHostPort(selected) - if err != nil { - return fmt.Errorf("Invalid hostport given: %s", selected) - } - - // Are we an IP? - if ip := net.ParseIP(t.host); ip != nil { - t.host_is_ip = true - t.addresses = []net.IP{ip} - } else { - // Lookup the server in DNS - t.host_is_ip = false - t.addresses, err = net.LookupIP(t.host) - if err != nil { - return fmt.Errorf("DNS lookup failure \"%s\": %s", t.host, err) - } - } - } - - // Try next address and drop it from our list - addressport := net.JoinHostPort(t.addresses[0].String(), t.port) - if len(t.addresses) > 1 { - t.addresses = t.addresses[1:] - } else { - t.addresses = nil - } - - var desc string - if t.host_is_ip { - desc = fmt.Sprintf("%s", addressport) - } else { - desc = fmt.Sprintf("%s (%s)", addressport, t.host) - } - - log.Info("Attempting to connect to %s", desc) - - tcpsocket, err := net.DialTimeout("tcp", addressport, t.net_config.Timeout) - if err != nil { - return fmt.Errorf("Failed to connect to %s: %s", desc, err) - } - - // Now wrap in TLS if this is the "tls" transport - if t.config.transport == "tls" { - // Disable SSLv3 (mitigate POODLE vulnerability) - 
t.config.tls_config.MinVersion = tls.VersionTLS10 - - // Set the tlsconfig server name for server validation (required since Go 1.3) - t.config.tls_config.ServerName = t.host - - t.tlssocket = tls.Client(&transportTcpWrap{transport: t, tcpsocket: tcpsocket}, &t.config.tls_config) - t.tlssocket.SetDeadline(time.Now().Add(t.net_config.Timeout)) - err = t.tlssocket.Handshake() - if err != nil { - t.tlssocket.Close() - tcpsocket.Close() - return fmt.Errorf("TLS Handshake failure with %s: %s", desc, err) - } - - t.socket = t.tlssocket - } else { - t.socket = tcpsocket - } - - log.Info("Connected to %s", desc) - - // Signal channels - t.shutdown = make(chan interface{}, 1) - t.send_chan = make(chan []byte, 1) - // Buffer of two for recv_chan since both routines may send an error to it - // First error we get back initiates disconnect, thus we must not block routines - t.recv_chan = make(chan interface{}, 2) - t.can_send = make(chan int, 1) - - // Start with a send - t.can_send <- 1 - - t.wait.Add(2) - - // Start separate sender and receiver so we can asynchronously send and receive for max performance - // They have to be different routines too because we don't have cross-platform poll, so they will need to block - // Of course, we'll time out and check shutdown on occasion - go t.sender() - go t.receiver() - - return nil + if t.shutdown != nil { + t.disconnect() + } + + // Have we exhausted the address list we had? + if t.addresses == nil { + var err error + + // Round robin to the next server + selected := t.net_config.Servers[t.roundrobin%len(t.net_config.Servers)] + t.roundrobin++ + + t.host, t.port, err = net.SplitHostPort(selected) + if err != nil { + return fmt.Errorf("Invalid hostport given: %s", selected) + } + + // Are we an IP? 
+ if ip := net.ParseIP(t.host); ip != nil { + t.host_is_ip = true + t.addresses = []net.IP{ip} + } else { + // Lookup the server in DNS + t.host_is_ip = false + t.addresses, err = net.LookupIP(t.host) + if err != nil { + return fmt.Errorf("DNS lookup failure \"%s\": %s", t.host, err) + } + } + } + + // Try next address and drop it from our list + addressport := net.JoinHostPort(t.addresses[0].String(), t.port) + if len(t.addresses) > 1 { + t.addresses = t.addresses[1:] + } else { + t.addresses = nil + } + + var desc string + if t.host_is_ip { + desc = fmt.Sprintf("%s", addressport) + } else { + desc = fmt.Sprintf("%s (%s)", addressport, t.host) + } + + log.Info("Attempting to connect to %s", desc) + + tcpsocket, err := net.DialTimeout("tcp", addressport, t.net_config.Timeout) + if err != nil { + return fmt.Errorf("Failed to connect to %s: %s", desc, err) + } + + // Now wrap in TLS if this is the "tls" transport + if t.config.transport == "tls" { + // Disable SSLv3 (mitigate POODLE vulnerability) + t.config.tls_config.MinVersion = tls.VersionTLS10 + + // Set the tlsconfig server name for server validation (required since Go 1.3) + t.config.tls_config.ServerName = t.host + + t.tlssocket = tls.Client(&transportTcpWrap{transport: t, tcpsocket: tcpsocket}, &t.config.tls_config) + t.tlssocket.SetDeadline(time.Now().Add(t.net_config.Timeout)) + err = t.tlssocket.Handshake() + if err != nil { + t.tlssocket.Close() + tcpsocket.Close() + return fmt.Errorf("TLS Handshake failure with %s: %s", desc, err) + } + + t.socket = t.tlssocket + } else { + t.socket = tcpsocket + } + + log.Info("Connected to %s", desc) + + // Signal channels + t.shutdown = make(chan interface{}, 1) + t.send_chan = make(chan []byte, 1) + // Buffer of two for recv_chan since both routines may send an error to it + // First error we get back initiates disconnect, thus we must not block routines + t.recv_chan = make(chan interface{}, 2) + t.can_send = make(chan int, 1) + + // Start with a send + t.can_send 
<- 1 + + t.wait.Add(2) + + // Start separate sender and receiver so we can asynchronously send and receive for max performance + // They have to be different routines too because we don't have cross-platform poll, so they will need to block + // Of course, we'll time out and check shutdown on occasion + go t.sender() + go t.receiver() + + return nil } func (t *TransportTcp) disconnect() { - if t.shutdown == nil { - return - } + if t.shutdown == nil { + return + } - // Send shutdown request - close(t.shutdown) - t.wait.Wait() - t.shutdown = nil + // Send shutdown request + close(t.shutdown) + t.wait.Wait() + t.shutdown = nil - // If tls, shutdown tls socket first - if t.config.transport == "tls" { - t.tlssocket.Close() - } + // If tls, shutdown tls socket first + if t.config.transport == "tls" { + t.tlssocket.Close() + } - t.socket.Close() + t.socket.Close() } func (t *TransportTcp) sender() { SendLoop: - for { - select { - case <-t.shutdown: - // Shutdown - break SendLoop - case msg := <-t.send_chan: - // Ask for more while we send this - t.setChan(t.can_send) - // Write deadline is managed by our net.Conn wrapper that tls will call into - _, err := t.socket.Write(msg) - if err != nil { - if net_err, ok := err.(net.Error); ok && net_err.Timeout() { - // Shutdown will have been received by the wrapper - break SendLoop - } else { - // Pass error back - t.recv_chan <- err - } - } - } - } - - t.wait.Done() + for { + select { + case <-t.shutdown: + // Shutdown + break SendLoop + case msg := <-t.send_chan: + // Ask for more while we send this + t.setChan(t.can_send) + // Write deadline is managed by our net.Conn wrapper that tls will call into + _, err := t.socket.Write(msg) + if err != nil { + if net_err, ok := err.(net.Error); ok && net_err.Timeout() { + // Shutdown will have been received by the wrapper + break SendLoop + } else { + // Pass error back + t.recv_chan <- err + } + } + } + } + + t.wait.Done() } func (t *TransportTcp) receiver() { - var err error - var 
shutdown bool - header := make([]byte, 8) - - for { - if err, shutdown = t.receiverRead(header); err != nil || shutdown { - break - } - - // Grab length of message - length := binary.BigEndian.Uint32(header[4:8]) - - // Sanity - if length > 1048576 { - err = fmt.Errorf("Data too large (%d)", length) - break - } - - // Allocate for full message - message := make([]byte, length) - - if err, shutdown = t.receiverRead(message); err != nil || shutdown { - break - } - - // Pass back the message - select { - case <-t.shutdown: - break - case t.recv_chan <- [][]byte{header[0:4], message}: - } - } /* loop until shutdown */ - - if err != nil { - // Pass the error back and abort - select { - case <-t.shutdown: - case t.recv_chan <- err: - } - } - - t.wait.Done() + var err error + var shutdown bool + header := make([]byte, 8) + + for { + if err, shutdown = t.receiverRead(header); err != nil || shutdown { + break + } + + // Grab length of message + length := binary.BigEndian.Uint32(header[4:8]) + + // Sanity + if length > 1048576 { + err = fmt.Errorf("Data too large (%d)", length) + break + } + + // Allocate for full message + message := make([]byte, length) + + if err, shutdown = t.receiverRead(message); err != nil || shutdown { + break + } + + // Pass back the message + select { + case <-t.shutdown: + break + case t.recv_chan <- [][]byte{header[0:4], message}: + } + } /* loop until shutdown */ + + if err != nil { + // Pass the error back and abort + select { + case <-t.shutdown: + case t.recv_chan <- err: + } + } + + t.wait.Done() } func (t *TransportTcp) receiverRead(data []byte) (error, bool) { - received := 0 + received := 0 RecvLoop: - for { - select { - case <-t.shutdown: - // Shutdown - break RecvLoop - default: - // Timeout after socket_interval_seconds, check for shutdown, and try again - t.socket.SetReadDeadline(time.Now().Add(socket_interval_seconds * time.Second)) - - length, err := t.socket.Read(data[received:]) - received += length - if err == nil || received >= 
len(data) { - // Success - return nil, false - } else if net_err, ok := err.(net.Error); ok && net_err.Timeout() { - // Keep trying - continue - } else { - // Pass an error back - return err, false - } - } /* select */ - } /* loop until required amount receive or shutdown */ - - return nil, true + for { + select { + case <-t.shutdown: + // Shutdown + break RecvLoop + default: + // Timeout after socket_interval_seconds, check for shutdown, and try again + t.socket.SetReadDeadline(time.Now().Add(socket_interval_seconds * time.Second)) + + length, err := t.socket.Read(data[received:]) + received += length + if err == nil || received >= len(data) { + // Success + return nil, false + } else if net_err, ok := err.(net.Error); ok && net_err.Timeout() { + // Keep trying + continue + } else { + // Pass an error back + return err, false + } + } /* select */ + } /* loop until required amount receive or shutdown */ + + return nil, true } func (t *TransportTcp) setChan(set chan<- int) { - select { - case set <- 1: - default: - } + select { + case set <- 1: + default: + } } func (t *TransportTcp) CanSend() <-chan int { - return t.can_send + return t.can_send } func (t *TransportTcp) Write(signature string, message []byte) (err error) { - var write_buffer *bytes.Buffer - write_buffer = bytes.NewBuffer(make([]byte, 0, len(signature)+4+len(message))) - - if _, err = write_buffer.Write([]byte(signature)); err != nil { - return - } - if err = binary.Write(write_buffer, binary.BigEndian, uint32(len(message))); err != nil { - return - } - if len(message) != 0 { - if _, err = write_buffer.Write(message); err != nil { - return - } - } - - t.send_chan <- write_buffer.Bytes() - return nil + var write_buffer *bytes.Buffer + write_buffer = bytes.NewBuffer(make([]byte, 0, len(signature)+4+len(message))) + + if _, err = write_buffer.Write([]byte(signature)); err != nil { + return + } + if err = binary.Write(write_buffer, binary.BigEndian, uint32(len(message))); err != nil { + return + } + if 
len(message) != 0 { + if _, err = write_buffer.Write(message); err != nil { + return + } + } + + t.send_chan <- write_buffer.Bytes() + return nil } func (t *TransportTcp) Read() <-chan interface{} { - return t.recv_chan + return t.recv_chan } func (t *TransportTcp) Shutdown() { - t.disconnect() + t.disconnect() } // Register the transports func init() { - rand.Seed(time.Now().UnixNano()) + rand.Seed(time.Now().UnixNano()) - core.RegisterTransport("tcp", NewTcpTransportFactory) - core.RegisterTransport("tls", NewTcpTransportFactory) + core.RegisterTransport("tcp", NewTcpTransportFactory) + core.RegisterTransport("tls", NewTcpTransportFactory) } diff --git a/src/lc-lib/transports/tcp_wrap.go b/src/lc-lib/transports/tcp_wrap.go index c78f4e3d..811e2a63 100644 --- a/src/lc-lib/transports/tcp_wrap.go +++ b/src/lc-lib/transports/tcp_wrap.go @@ -17,71 +17,71 @@ package transports import ( - "net" - "time" + "net" + "time" ) // If tls.Conn.Write ever times out it will permanently break, so we cannot use SetWriteDeadline with it directly // So we wrap the given tcpsocket and handle the SetWriteDeadline there and check shutdown signal and loop // Inside tls.Conn the Write blocks until it finishes and everyone is happy type transportTcpWrap struct { - transport *TransportTcp - tcpsocket net.Conn + transport *TransportTcp + tcpsocket net.Conn - net.Conn + net.Conn } func (w *transportTcpWrap) Read(b []byte) (int, error) { - return w.tcpsocket.Read(b) + return w.tcpsocket.Read(b) } func (w *transportTcpWrap) Write(b []byte) (n int, err error) { - length := 0 + length := 0 RetrySend: - for { - // Timeout after socket_interval_seconds, check for shutdown, and try again - w.tcpsocket.SetWriteDeadline(time.Now().Add(socket_interval_seconds * time.Second)) + for { + // Timeout after socket_interval_seconds, check for shutdown, and try again + w.tcpsocket.SetWriteDeadline(time.Now().Add(socket_interval_seconds * time.Second)) - n, err = w.tcpsocket.Write(b[length:]) - length += n - 
if err == nil { - return length, err - } else if net_err, ok := err.(net.Error); ok && net_err.Timeout() { - // Check for shutdown, then try again - select { - case <-w.transport.shutdown: - // Shutdown - return length, err - default: - goto RetrySend - } - } else { - return length, err - } - } /* loop forever */ + n, err = w.tcpsocket.Write(b[length:]) + length += n + if err == nil { + return length, err + } else if net_err, ok := err.(net.Error); ok && net_err.Timeout() { + // Check for shutdown, then try again + select { + case <-w.transport.shutdown: + // Shutdown + return length, err + default: + goto RetrySend + } + } else { + return length, err + } + } /* loop forever */ } func (w *transportTcpWrap) Close() error { - return w.tcpsocket.Close() + return w.tcpsocket.Close() } func (w *transportTcpWrap) LocalAddr() net.Addr { - return w.tcpsocket.LocalAddr() + return w.tcpsocket.LocalAddr() } func (w *transportTcpWrap) RemoteAddr() net.Addr { - return w.tcpsocket.RemoteAddr() + return w.tcpsocket.RemoteAddr() } func (w *transportTcpWrap) SetDeadline(t time.Time) error { - return w.tcpsocket.SetDeadline(t) + return w.tcpsocket.SetDeadline(t) } func (w *transportTcpWrap) SetReadDeadline(t time.Time) error { - return w.tcpsocket.SetReadDeadline(t) + return w.tcpsocket.SetReadDeadline(t) } func (w *transportTcpWrap) SetWriteDeadline(t time.Time) error { - return w.tcpsocket.SetWriteDeadline(t) + return w.tcpsocket.SetWriteDeadline(t) } diff --git a/src/lc-lib/transports/zmq.go b/src/lc-lib/transports/zmq.go index 6d423d94..c1789cf4 100644 --- a/src/lc-lib/transports/zmq.go +++ b/src/lc-lib/transports/zmq.go @@ -19,727 +19,727 @@ package transports import ( - "bytes" - "encoding/binary" - "errors" - "fmt" - zmq "github.com/alecthomas/gozmq" - "github.com/driskell/log-courier/src/lc-lib/core" - "net" - "regexp" - "runtime" - "sync" - "syscall" + "bytes" + "encoding/binary" + "errors" + "fmt" + zmq "github.com/alecthomas/gozmq" + 
"github.com/driskell/log-courier/src/lc-lib/core" + "net" + "regexp" + "runtime" + "sync" + "syscall" ) const ( - zmq_signal_output = "O" - zmq_signal_input = "I" - zmq_signal_shutdown = "S" + zmq_signal_output = "O" + zmq_signal_input = "I" + zmq_signal_shutdown = "S" ) const ( - Monitor_Part_Header = iota - Monitor_Part_Data - Monitor_Part_Extraneous + Monitor_Part_Header = iota + Monitor_Part_Data + Monitor_Part_Extraneous ) const ( - default_NetworkConfig_PeerSendQueue int64 = 2 + default_NetworkConfig_PeerSendQueue int64 = 2 ) type TransportZmqFactory struct { - transport string + transport string - CurveServerkey string `config:"curve server key"` - CurvePublickey string `config:"curve public key"` - CurveSecretkey string `config:"curve secret key"` + CurveServerkey string `config:"curve server key"` + CurvePublickey string `config:"curve public key"` + CurveSecretkey string `config:"curve secret key"` - PeerSendQueue int64 + PeerSendQueue int64 - hostport_re *regexp.Regexp + hostport_re *regexp.Regexp } type TransportZmq struct { - config *TransportZmqFactory - net_config *core.NetworkConfig - context *zmq.Context - dealer *zmq.Socket - monitor *zmq.Socket - poll_items []zmq.PollItem - send_buff *ZMQMessage - recv_buff [][]byte - recv_body bool - event ZMQEvent - ready bool - - wait sync.WaitGroup - - bridge_chan chan []byte - - send_chan chan *ZMQMessage - recv_chan chan interface{} - recv_bridge_chan chan interface{} - - can_send chan int + config *TransportZmqFactory + net_config *core.NetworkConfig + context *zmq.Context + dealer *zmq.Socket + monitor *zmq.Socket + poll_items []zmq.PollItem + send_buff *ZMQMessage + recv_buff [][]byte + recv_body bool + event ZMQEvent + ready bool + + wait sync.WaitGroup + + bridge_chan chan []byte + + send_chan chan *ZMQMessage + recv_chan chan interface{} + recv_bridge_chan chan interface{} + + can_send chan int } type ZMQMessage struct { - part []byte - final bool + part []byte + final bool } type ZMQEvent struct { - 
part int - event zmq.Event - val int32 - data string + part int + event zmq.Event + val int32 + data string } func (e *ZMQEvent) Log() { - switch e.event { - case zmq.EVENT_CONNECTED: - if e.data == "" { - log.Info("Connected") - } else { - log.Info("Connected to %s", e.data) - } - case zmq.EVENT_CONNECT_DELAYED: - // Don't log anything for this - case zmq.EVENT_CONNECT_RETRIED: - if e.data == "" { - log.Info("Attempting to connect") - } else { - log.Info("Attempting to connect to %s", e.data) - } - case zmq.EVENT_CLOSED: - if e.data == "" { - log.Error("Connection closed") - } else { - log.Error("Connection to %s closed", e.data) - } - case zmq.EVENT_DISCONNECTED: - if e.data == "" { - log.Error("Lost connection") - } else { - log.Error("Lost connection to %s", e.data) - } - default: - log.Debug("Unknown monitor message (event:%d, val:%d, data:[% X])", e.event, e.val, e.data) - } + switch e.event { + case zmq.EVENT_CONNECTED: + if e.data == "" { + log.Info("Connected") + } else { + log.Info("Connected to %s", e.data) + } + case zmq.EVENT_CONNECT_DELAYED: + // Don't log anything for this + case zmq.EVENT_CONNECT_RETRIED: + if e.data == "" { + log.Info("Attempting to connect") + } else { + log.Info("Attempting to connect to %s", e.data) + } + case zmq.EVENT_CLOSED: + if e.data == "" { + log.Error("Connection closed") + } else { + log.Error("Connection to %s closed", e.data) + } + case zmq.EVENT_DISCONNECTED: + if e.data == "" { + log.Error("Lost connection") + } else { + log.Error("Lost connection to %s", e.data) + } + default: + log.Debug("Unknown monitor message (event:%d, val:%d, data:[% X])", e.event, e.val, e.data) + } } type TransportZmqRegistrar struct { } func NewZmqTransportFactory(config *core.Config, config_path string, unused map[string]interface{}, name string) (core.TransportFactory, error) { - var err error - - ret := &TransportZmqFactory{ - transport: name, - hostport_re: regexp.MustCompile(`^\[?([^]]+)\]?:([0-9]+)$`), - } - - if name == "zmq" { - if 
err = config.PopulateConfig(ret, config_path, unused); err != nil { - return nil, err - } - - if err := ret.processConfig(config_path); err != nil { - return nil, err - } - - return ret, nil - } - - // Don't allow curve settings - if _, ok := unused["CurveServerkey"]; ok { - goto CheckUnused - } - if _, ok := unused["CurvePublickey"]; ok { - goto CheckUnused - } - if _, ok := unused["CurveSecretkey"]; ok { - goto CheckUnused - } - - if err = config.PopulateConfig(ret, config_path, unused); err != nil { - return nil, err - } - - if ret.PeerSendQueue == 0 { - ret.PeerSendQueue = default_NetworkConfig_PeerSendQueue - } + var err error + + ret := &TransportZmqFactory{ + transport: name, + hostport_re: regexp.MustCompile(`^\[?([^]]+)\]?:([0-9]+)$`), + } + + if name == "zmq" { + if err = config.PopulateConfig(ret, config_path, unused); err != nil { + return nil, err + } + + if err := ret.processConfig(config_path); err != nil { + return nil, err + } + + return ret, nil + } + + // Don't allow curve settings + if _, ok := unused["CurveServerkey"]; ok { + goto CheckUnused + } + if _, ok := unused["CurvePublickey"]; ok { + goto CheckUnused + } + if _, ok := unused["CurveSecretkey"]; ok { + goto CheckUnused + } + + if err = config.PopulateConfig(ret, config_path, unused); err != nil { + return nil, err + } + + if ret.PeerSendQueue == 0 { + ret.PeerSendQueue = default_NetworkConfig_PeerSendQueue + } CheckUnused: - if err := config.ReportUnusedConfig(config_path, unused); err != nil { - return nil, err - } + if err := config.ReportUnusedConfig(config_path, unused); err != nil { + return nil, err + } - return ret, nil + return ret, nil } func (f *TransportZmqFactory) NewTransport(config *core.NetworkConfig) (core.Transport, error) { - return &TransportZmq{config: f, net_config: config}, nil + return &TransportZmq{config: f, net_config: config}, nil } func (t *TransportZmq) ReloadConfig(new_net_config *core.NetworkConfig) int { - // Check we can grab new ZMQ config to compare, if 
not force transport reinit - new_config, ok := new_net_config.TransportFactory.(*TransportZmqFactory) - if !ok { - return core.Reload_Transport - } + // Check we can grab new ZMQ config to compare, if not force transport reinit + new_config, ok := new_net_config.TransportFactory.(*TransportZmqFactory) + if !ok { + return core.Reload_Transport + } - if new_config.CurveServerkey != t.config.CurveServerkey || new_config.CurvePublickey != t.config.CurvePublickey || new_config.CurveSecretkey != t.config.CurveSecretkey { - return core.Reload_Transport - } + if new_config.CurveServerkey != t.config.CurveServerkey || new_config.CurvePublickey != t.config.CurvePublickey || new_config.CurveSecretkey != t.config.CurveSecretkey { + return core.Reload_Transport + } - // Publisher handles changes to net_config, but ensure we store the latest in case it asks for a reconnect - t.net_config = new_net_config + // Publisher handles changes to net_config, but ensure we store the latest in case it asks for a reconnect + t.net_config = new_net_config - return core.Reload_None + return core.Reload_None } func (t *TransportZmq) Init() (err error) { - // Initialise once for ZMQ - if t.ready { - // If already initialised, ask if we can send again - t.bridge_chan <- []byte(zmq_signal_output) - return nil - } - - t.context, err = zmq.NewContext() - if err != nil { - return fmt.Errorf("Failed to create ZMQ context: %s", err) - } - defer func() { - if err != nil { - t.context.Close() - } - }() - - // Control sockets to connect bridge to poller - bridge_in, err := t.context.NewSocket(zmq.PUSH) - if err != nil { - return fmt.Errorf("Failed to create internal ZMQ PUSH socket: %s", err) - } - defer func() { - if err != nil { - bridge_in.Close() - } - }() - - if err = bridge_in.Bind("inproc://notify"); err != nil { - return fmt.Errorf("Failed to bind internal ZMQ PUSH socket: %s", err) - } - - bridge_out, err := t.context.NewSocket(zmq.PULL) - if err != nil { - return fmt.Errorf("Failed to create 
internal ZMQ PULL socket: %s", err) - } - defer func() { - if err != nil { - bridge_out.Close() - } - }() - - if err = bridge_out.Connect("inproc://notify"); err != nil { - return fmt.Errorf("Failed to connect internal ZMQ PULL socket: %s", err) - } - - // Outbound dealer socket will fair-queue load balance amongst peers - if t.dealer, err = t.context.NewSocket(zmq.DEALER); err != nil { - return fmt.Errorf("Failed to create ZMQ DEALER socket: %s", err) - } - defer func() { - if err != nil { - t.dealer.Close() - } - }() - - if err = t.dealer.Monitor("inproc://monitor", zmq.EVENT_ALL); err != nil { - return fmt.Errorf("Failed to bind DEALER socket to monitor: %s", err) - } - - if err = t.configureSocket(); err != nil { - return fmt.Errorf("Failed to configure DEALER socket: %s", err) - } - - // Configure reconnect interval - if err = t.dealer.SetReconnectIvlMax(t.net_config.Reconnect); err != nil { - return fmt.Errorf("Failed to set ZMQ reconnect interval: %s", err) - } - - // We should not LINGER. If we do, socket Close and also context Close will - // block infinitely until the message queue is flushed. 
Set to 0 to discard - // all messages immediately when we call Close - if err = t.dealer.SetLinger(0); err != nil { - return fmt.Errorf("Failed to set ZMQ linger period: %s", err) - } - - // Set the outbound queue - if err = t.dealer.SetSndHWM(int(t.config.PeerSendQueue)); err != nil { - return fmt.Errorf("Failed to set ZMQ send highwater: %s", err) - } - - // Monitor socket - if t.monitor, err = t.context.NewSocket(zmq.PULL); err != nil { - return fmt.Errorf("Failed to create monitor ZMQ PULL socket: %s", err) - } - defer func() { - if err != nil { - t.monitor.Close() - } - }() - - if err = t.monitor.Connect("inproc://monitor"); err != nil { - return fmt.Errorf("Failed to connect monitor ZMQ PULL socket: %s", err) - } - - // Register endpoints - endpoints := 0 - for _, hostport := range t.net_config.Servers { - submatch := t.config.hostport_re.FindSubmatch([]byte(hostport)) - if submatch == nil { - log.Warning("Invalid host:port given: %s", hostport) - continue - } - - // Lookup the server in DNS (if this is IP it will implicitly return) - host := string(submatch[1]) - port := string(submatch[2]) - addresses, err := net.LookupHost(host) - if err != nil { - log.Warning("DNS lookup failure \"%s\": %s", host, err) - continue - } - - // Register each address - for _, address := range addresses { - addressport := net.JoinHostPort(address, port) - - if err = t.dealer.Connect("tcp://" + addressport); err != nil { - log.Warning("Failed to register %s (%s) with ZMQ, skipping", addressport, host) - continue - } - - log.Info("Registered %s (%s) with ZMQ", addressport, host) - endpoints++ - } - } - - if endpoints == 0 { - return errors.New("Failed to register any of the specified endpoints.") - } - - major, minor, patch := zmq.Version() - log.Info("libzmq version %d.%d.%d", major, minor, patch) - - // Signal channels - t.bridge_chan = make(chan []byte, 1) - t.send_chan = make(chan *ZMQMessage, 2) - t.recv_chan = make(chan interface{}, 1) - t.recv_bridge_chan = make(chan 
interface{}, 1) - t.can_send = make(chan int, 1) - - // Waiter we use to wait for shutdown - t.wait.Add(2) - - // Bridge between channels and ZMQ - go t.bridge(bridge_in) - - // The poller - go t.poller(bridge_out) - - t.ready = true - t.send_buff = nil - t.recv_buff = nil - t.recv_body = false - - return nil + // Initialise once for ZMQ + if t.ready { + // If already initialised, ask if we can send again + t.bridge_chan <- []byte(zmq_signal_output) + return nil + } + + t.context, err = zmq.NewContext() + if err != nil { + return fmt.Errorf("Failed to create ZMQ context: %s", err) + } + defer func() { + if err != nil { + t.context.Close() + } + }() + + // Control sockets to connect bridge to poller + bridge_in, err := t.context.NewSocket(zmq.PUSH) + if err != nil { + return fmt.Errorf("Failed to create internal ZMQ PUSH socket: %s", err) + } + defer func() { + if err != nil { + bridge_in.Close() + } + }() + + if err = bridge_in.Bind("inproc://notify"); err != nil { + return fmt.Errorf("Failed to bind internal ZMQ PUSH socket: %s", err) + } + + bridge_out, err := t.context.NewSocket(zmq.PULL) + if err != nil { + return fmt.Errorf("Failed to create internal ZMQ PULL socket: %s", err) + } + defer func() { + if err != nil { + bridge_out.Close() + } + }() + + if err = bridge_out.Connect("inproc://notify"); err != nil { + return fmt.Errorf("Failed to connect internal ZMQ PULL socket: %s", err) + } + + // Outbound dealer socket will fair-queue load balance amongst peers + if t.dealer, err = t.context.NewSocket(zmq.DEALER); err != nil { + return fmt.Errorf("Failed to create ZMQ DEALER socket: %s", err) + } + defer func() { + if err != nil { + t.dealer.Close() + } + }() + + if err = t.dealer.Monitor("inproc://monitor", zmq.EVENT_ALL); err != nil { + return fmt.Errorf("Failed to bind DEALER socket to monitor: %s", err) + } + + if err = t.configureSocket(); err != nil { + return fmt.Errorf("Failed to configure DEALER socket: %s", err) + } + + // Configure reconnect interval + 
if err = t.dealer.SetReconnectIvlMax(t.net_config.Reconnect); err != nil { + return fmt.Errorf("Failed to set ZMQ reconnect interval: %s", err) + } + + // We should not LINGER. If we do, socket Close and also context Close will + // block infinitely until the message queue is flushed. Set to 0 to discard + // all messages immediately when we call Close + if err = t.dealer.SetLinger(0); err != nil { + return fmt.Errorf("Failed to set ZMQ linger period: %s", err) + } + + // Set the outbound queue + if err = t.dealer.SetSndHWM(int(t.config.PeerSendQueue)); err != nil { + return fmt.Errorf("Failed to set ZMQ send highwater: %s", err) + } + + // Monitor socket + if t.monitor, err = t.context.NewSocket(zmq.PULL); err != nil { + return fmt.Errorf("Failed to create monitor ZMQ PULL socket: %s", err) + } + defer func() { + if err != nil { + t.monitor.Close() + } + }() + + if err = t.monitor.Connect("inproc://monitor"); err != nil { + return fmt.Errorf("Failed to connect monitor ZMQ PULL socket: %s", err) + } + + // Register endpoints + endpoints := 0 + for _, hostport := range t.net_config.Servers { + submatch := t.config.hostport_re.FindSubmatch([]byte(hostport)) + if submatch == nil { + log.Warning("Invalid host:port given: %s", hostport) + continue + } + + // Lookup the server in DNS (if this is IP it will implicitly return) + host := string(submatch[1]) + port := string(submatch[2]) + addresses, err := net.LookupHost(host) + if err != nil { + log.Warning("DNS lookup failure \"%s\": %s", host, err) + continue + } + + // Register each address + for _, address := range addresses { + addressport := net.JoinHostPort(address, port) + + if err = t.dealer.Connect("tcp://" + addressport); err != nil { + log.Warning("Failed to register %s (%s) with ZMQ, skipping", addressport, host) + continue + } + + log.Info("Registered %s (%s) with ZMQ", addressport, host) + endpoints++ + } + } + + if endpoints == 0 { + return errors.New("Failed to register any of the specified endpoints.") + 
} + + major, minor, patch := zmq.Version() + log.Info("libzmq version %d.%d.%d", major, minor, patch) + + // Signal channels + t.bridge_chan = make(chan []byte, 1) + t.send_chan = make(chan *ZMQMessage, 2) + t.recv_chan = make(chan interface{}, 1) + t.recv_bridge_chan = make(chan interface{}, 1) + t.can_send = make(chan int, 1) + + // Waiter we use to wait for shutdown + t.wait.Add(2) + + // Bridge between channels and ZMQ + go t.bridge(bridge_in) + + // The poller + go t.poller(bridge_out) + + t.ready = true + t.send_buff = nil + t.recv_buff = nil + t.recv_body = false + + return nil } func (t *TransportZmq) bridge(bridge_in *zmq.Socket) { - var message interface{} + var message interface{} - // Wait on channel, passing into socket - // This keeps the socket in a single thread, otherwise we have to lock the entire publisher - runtime.LockOSThread() + // Wait on channel, passing into socket + // This keeps the socket in a single thread, otherwise we have to lock the entire publisher + runtime.LockOSThread() BridgeLoop: - for { - select { - case notify := <-t.bridge_chan: - bridge_in.Send(notify, 0) - - // Shutdown? - if string(notify) == zmq_signal_shutdown { - break BridgeLoop - } - case message = <-t.recv_bridge_chan: - // The reason we flush recv through the bridge and not directly to recv_chan is so that if - // the poller was quick and had to cache a receive as the channel was full, it will stop - // polling - flushing through bridge allows us to signal poller to start polling again - // It is not the publisher's responsibility to do this, and TLS wouldn't need it - bridge_in.Send([]byte(zmq_signal_input), 0) - - // Keep trying to forward on the message - ForwardLoop: - for { - select { - case notify := <-t.bridge_chan: - bridge_in.Send(notify, 0) - - // Shutdown? 
- if string(notify) == zmq_signal_shutdown { - break BridgeLoop - } - case t.recv_chan <- message: - break ForwardLoop - } - } - } - } - - // We should linger by default to ensure shutdown is transmitted - bridge_in.Close() - runtime.UnlockOSThread() - t.wait.Done() + for { + select { + case notify := <-t.bridge_chan: + bridge_in.Send(notify, 0) + + // Shutdown? + if string(notify) == zmq_signal_shutdown { + break BridgeLoop + } + case message = <-t.recv_bridge_chan: + // The reason we flush recv through the bridge and not directly to recv_chan is so that if + // the poller was quick and had to cache a receive as the channel was full, it will stop + // polling - flushing through bridge allows us to signal poller to start polling again + // It is not the publisher's responsibility to do this, and TLS wouldn't need it + bridge_in.Send([]byte(zmq_signal_input), 0) + + // Keep trying to forward on the message + ForwardLoop: + for { + select { + case notify := <-t.bridge_chan: + bridge_in.Send(notify, 0) + + // Shutdown? 
+ if string(notify) == zmq_signal_shutdown { + break BridgeLoop + } + case t.recv_chan <- message: + break ForwardLoop + } + } + } + } + + // We should linger by default to ensure shutdown is transmitted + bridge_in.Close() + runtime.UnlockOSThread() + t.wait.Done() } func (t *TransportZmq) poller(bridge_out *zmq.Socket) { - // ZMQ sockets are not thread-safe, so we have to send/receive on same thread - // Thus, we cannot use a sender/receiver thread pair like we can with TLS so we use a single threaded poller instead - // In order to asynchronously send and receive we just poll and do necessary actions - - // When data is ready to send we'll get a channel ping, that is bridged to ZMQ so we can then send data - // For receiving, we receive here and bridge it to the channels, then receive more once that's through - runtime.LockOSThread() - - t.poll_items = make([]zmq.PollItem, 3) - - // Listen always on bridge - t.poll_items[0].Socket = bridge_out - t.poll_items[0].Events = zmq.POLLIN | zmq.POLLOUT - - // Always check for input on dealer - but also initially check for OUT so we can flag send is ready - t.poll_items[1].Socket = t.dealer - t.poll_items[1].Events = zmq.POLLIN | zmq.POLLOUT - - // Always listen for input on monitor - t.poll_items[2].Socket = t.monitor - t.poll_items[2].Events = zmq.POLLIN - - for { - // Poll for events - if _, err := zmq.Poll(t.poll_items, -1); err != nil { - // Retry on EINTR - if err == syscall.EINTR { - continue - } - - // Failure - t.recv_chan <- fmt.Errorf("zmq.Poll failure %s", err) - break - } - - // Process control channel - if t.poll_items[0].REvents&zmq.POLLIN != 0 { - if !t.processControlIn(bridge_out) { - break - } - } - - // Process dealer receive - if t.poll_items[1].REvents&zmq.POLLIN != 0 { - if !t.processDealerIn() { - break - } - } - - // Process dealer send - if t.poll_items[1].REvents&zmq.POLLOUT != 0 { - if !t.processDealerOut() { - break - } - } - - // Process monitor receive - if t.poll_items[2].REvents&zmq.POLLIN 
!= 0 { - if !t.processMonitorIn() { - break - } - } - } - - bridge_out.Close() - runtime.UnlockOSThread() - t.wait.Done() + // ZMQ sockets are not thread-safe, so we have to send/receive on same thread + // Thus, we cannot use a sender/receiver thread pair like we can with TLS so we use a single threaded poller instead + // In order to asynchronously send and receive we just poll and do necessary actions + + // When data is ready to send we'll get a channel ping, that is bridged to ZMQ so we can then send data + // For receiving, we receive here and bridge it to the channels, then receive more once that's through + runtime.LockOSThread() + + t.poll_items = make([]zmq.PollItem, 3) + + // Listen always on bridge + t.poll_items[0].Socket = bridge_out + t.poll_items[0].Events = zmq.POLLIN | zmq.POLLOUT + + // Always check for input on dealer - but also initially check for OUT so we can flag send is ready + t.poll_items[1].Socket = t.dealer + t.poll_items[1].Events = zmq.POLLIN | zmq.POLLOUT + + // Always listen for input on monitor + t.poll_items[2].Socket = t.monitor + t.poll_items[2].Events = zmq.POLLIN + + for { + // Poll for events + if _, err := zmq.Poll(t.poll_items, -1); err != nil { + // Retry on EINTR + if err == syscall.EINTR { + continue + } + + // Failure + t.recv_chan <- fmt.Errorf("zmq.Poll failure %s", err) + break + } + + // Process control channel + if t.poll_items[0].REvents&zmq.POLLIN != 0 { + if !t.processControlIn(bridge_out) { + break + } + } + + // Process dealer receive + if t.poll_items[1].REvents&zmq.POLLIN != 0 { + if !t.processDealerIn() { + break + } + } + + // Process dealer send + if t.poll_items[1].REvents&zmq.POLLOUT != 0 { + if !t.processDealerOut() { + break + } + } + + // Process monitor receive + if t.poll_items[2].REvents&zmq.POLLIN != 0 { + if !t.processMonitorIn() { + break + } + } + } + + bridge_out.Close() + runtime.UnlockOSThread() + t.wait.Done() } func (t *TransportZmq) processControlIn(bridge_out *zmq.Socket) (ok bool) { - 
for { - RetryControl: - msg, err := bridge_out.Recv(zmq.DONTWAIT) - if err != nil { - switch err { - case syscall.EINTR: - // Try again - goto RetryControl - case syscall.EAGAIN: - // No more messages - return true - } - - // Failure - t.recv_chan <- fmt.Errorf("Pull zmq.Socket.Recv failure %s", err) - return - } - - switch string(msg) { - case zmq_signal_output: - // Start polling for send - t.poll_items[1].Events = t.poll_items[1].Events | zmq.POLLOUT - case zmq_signal_input: - // If we staged a receive, process that - if t.recv_buff != nil { - select { - case t.recv_bridge_chan <- t.recv_buff: - t.recv_buff = nil - - // Start polling for receive - t.poll_items[1].Events = t.poll_items[1].Events | zmq.POLLIN - default: - // Do nothing, we were asked for receive but channel is already full - } - } else { - // Start polling for receive - t.poll_items[1].Events = t.poll_items[1].Events | zmq.POLLIN - } - case zmq_signal_shutdown: - // Shutdown - return - } - } + for { + RetryControl: + msg, err := bridge_out.Recv(zmq.DONTWAIT) + if err != nil { + switch err { + case syscall.EINTR: + // Try again + goto RetryControl + case syscall.EAGAIN: + // No more messages + return true + } + + // Failure + t.recv_chan <- fmt.Errorf("Pull zmq.Socket.Recv failure %s", err) + return + } + + switch string(msg) { + case zmq_signal_output: + // Start polling for send + t.poll_items[1].Events = t.poll_items[1].Events | zmq.POLLOUT + case zmq_signal_input: + // If we staged a receive, process that + if t.recv_buff != nil { + select { + case t.recv_bridge_chan <- t.recv_buff: + t.recv_buff = nil + + // Start polling for receive + t.poll_items[1].Events = t.poll_items[1].Events | zmq.POLLIN + default: + // Do nothing, we were asked for receive but channel is already full + } + } else { + // Start polling for receive + t.poll_items[1].Events = t.poll_items[1].Events | zmq.POLLIN + } + case zmq_signal_shutdown: + // Shutdown + return + } + } } func (t *TransportZmq) processDealerOut() (ok 
bool) { - var sent_one bool - - // Something in the staging buffer? - if t.send_buff != nil { - sent, s_ok := t.dealerSend(t.send_buff) - if !s_ok { - return - } - if !sent { - ok = true - return - } - - t.send_buff = nil - sent_one = true - } - - // Send messages from channel + var sent_one bool + + // Something in the staging buffer? + if t.send_buff != nil { + sent, s_ok := t.dealerSend(t.send_buff) + if !s_ok { + return + } + if !sent { + ok = true + return + } + + t.send_buff = nil + sent_one = true + } + + // Send messages from channel LoopSend: - for { - select { - case msg := <-t.send_chan: - sent, s_ok := t.dealerSend(msg) - if !s_ok { - return - } - if !sent { - t.send_buff = msg - break - } - - sent_one = true - default: - break LoopSend - } - } - - if sent_one { - // We just sent something, check POLLOUT still active before signalling we can send more - // TODO: Check why Events() is returning uint64 instead of PollEvents - // TODO: This is broken and actually returns an error - /*if events, _ := t.dealer.Events(); zmq.PollEvents(events)&zmq.POLLOUT != 0 { - t.poll_items[1].Events = t.poll_items[1].Events ^ zmq.POLLOUT - t.setChan(t.can_send) - }*/ - } else { - t.poll_items[1].Events = t.poll_items[1].Events ^ zmq.POLLOUT - t.setChan(t.can_send) - } - - ok = true - return + for { + select { + case msg := <-t.send_chan: + sent, s_ok := t.dealerSend(msg) + if !s_ok { + return + } + if !sent { + t.send_buff = msg + break + } + + sent_one = true + default: + break LoopSend + } + } + + if sent_one { + // We just sent something, check POLLOUT still active before signalling we can send more + // TODO: Check why Events() is returning uint64 instead of PollEvents + // TODO: This is broken and actually returns an error + /*if events, _ := t.dealer.Events(); zmq.PollEvents(events)&zmq.POLLOUT != 0 { + t.poll_items[1].Events = t.poll_items[1].Events ^ zmq.POLLOUT + t.setChan(t.can_send) + }*/ + } else { + t.poll_items[1].Events = t.poll_items[1].Events ^ 
zmq.POLLOUT + t.setChan(t.can_send) + } + + ok = true + return } func (t *TransportZmq) dealerSend(msg *ZMQMessage) (sent bool, ok bool) { - var err error + var err error RetrySend: - if msg.final { - err = t.dealer.Send(msg.part, zmq.DONTWAIT) - } else { - err = t.dealer.Send(msg.part, zmq.DONTWAIT|zmq.SNDMORE) - } - if err != nil { - switch err { - case syscall.EINTR: - // Try again - goto RetrySend - case syscall.EAGAIN: - // No more messages - ok = true - return - } - - // Failure - t.recv_chan <- fmt.Errorf("Dealer zmq.Socket.Send failure %s", err) - return - } - - sent = true - ok = true - return + if msg.final { + err = t.dealer.Send(msg.part, zmq.DONTWAIT) + } else { + err = t.dealer.Send(msg.part, zmq.DONTWAIT|zmq.SNDMORE) + } + if err != nil { + switch err { + case syscall.EINTR: + // Try again + goto RetrySend + case syscall.EAGAIN: + // No more messages + ok = true + return + } + + // Failure + t.recv_chan <- fmt.Errorf("Dealer zmq.Socket.Send failure %s", err) + return + } + + sent = true + ok = true + return } func (t *TransportZmq) processDealerIn() (ok bool) { - for { - // Bring in the messages - RetryRecv: - data, err := t.dealer.Recv(zmq.DONTWAIT) - if err != nil { - switch err { - case syscall.EINTR: - // Try again - goto RetryRecv - case syscall.EAGAIN: - // No more messages - ok = true - return - } - - // Failure - t.recv_chan <- fmt.Errorf("Dealer zmq.Socket.Recv failure %s", err) - return - } - - more, err := t.dealer.RcvMore() - if err != nil { - // Failure - t.recv_chan <- fmt.Errorf("Dealer zmq.Socket.RcvMore failure %s", err) - return - } - - // Sanity check, and don't save until empty message - if len(data) == 0 && more { - // Message separator, start returning - t.recv_body = true - continue - } else if more || !t.recv_body { - // Ignore all but last message - continue - } - - t.recv_body = false - - // Last message and receiving, validate it first - if len(data) < 8 { - log.Warning("Skipping invalid message: not enough data") - 
continue - } - - length := binary.BigEndian.Uint32(data[4:8]) - if length > 1048576 { - log.Warning("Skipping invalid message: data too large (%d)", length) - continue - } else if length != uint32(len(data))-8 { - log.Warning("Skipping invalid message: data has invalid length (%d != %d)", len(data)-8, length) - continue - } - - message := [][]byte{data[0:4], data[8:]} - - // Bridge to channels - select { - case t.recv_bridge_chan <- message: - default: - // We filled the channel, stop polling until we pull something off of it and stage the recv - t.recv_buff = message - t.poll_items[1].Events = t.poll_items[1].Events ^ zmq.POLLIN - ok = true - return - } - } + for { + // Bring in the messages + RetryRecv: + data, err := t.dealer.Recv(zmq.DONTWAIT) + if err != nil { + switch err { + case syscall.EINTR: + // Try again + goto RetryRecv + case syscall.EAGAIN: + // No more messages + ok = true + return + } + + // Failure + t.recv_chan <- fmt.Errorf("Dealer zmq.Socket.Recv failure %s", err) + return + } + + more, err := t.dealer.RcvMore() + if err != nil { + // Failure + t.recv_chan <- fmt.Errorf("Dealer zmq.Socket.RcvMore failure %s", err) + return + } + + // Sanity check, and don't save until empty message + if len(data) == 0 && more { + // Message separator, start returning + t.recv_body = true + continue + } else if more || !t.recv_body { + // Ignore all but last message + continue + } + + t.recv_body = false + + // Last message and receiving, validate it first + if len(data) < 8 { + log.Warning("Skipping invalid message: not enough data") + continue + } + + length := binary.BigEndian.Uint32(data[4:8]) + if length > 1048576 { + log.Warning("Skipping invalid message: data too large (%d)", length) + continue + } else if length != uint32(len(data))-8 { + log.Warning("Skipping invalid message: data has invalid length (%d != %d)", len(data)-8, length) + continue + } + + message := [][]byte{data[0:4], data[8:]} + + // Bridge to channels + select { + case t.recv_bridge_chan 
<- message: + default: + // We filled the channel, stop polling until we pull something off of it and stage the recv + t.recv_buff = message + t.poll_items[1].Events = t.poll_items[1].Events ^ zmq.POLLIN + ok = true + return + } + } } func (t *TransportZmq) setChan(set chan int) { - select { - case set <- 1: - default: - } + select { + case set <- 1: + default: + } } func (t *TransportZmq) CanSend() <-chan int { - return t.can_send + return t.can_send } func (t *TransportZmq) Write(signature string, message []byte) (err error) { - var write_buffer *bytes.Buffer - write_buffer = bytes.NewBuffer(make([]byte, 0, len(signature)+4+len(message))) - - if _, err = write_buffer.Write([]byte(signature)); err != nil { - return - } - if err = binary.Write(write_buffer, binary.BigEndian, uint32(len(message))); err != nil { - return - } - if len(message) != 0 { - if _, err = write_buffer.Write(message); err != nil { - return - } - } - - // TODO: Fix regression where we could pend all payloads on a single ZMQ peer - // We should switch to full freelance pattern - ROUTER-to-ROUTER - // For this to work with ZMQ 3.2 we need to force identities on the server - // as the connection IP:Port and use those identities as available send pool - // For ZMQ 4+ we can skip using those and enable ZMQ_ROUTER_PROBE which sends - // an empty message on connection - server should respond with empty message - // which will allow us to populate identity list that way. 
- // ZMQ 4 approach is rigid as it means we don't need to rely on fixed - // identities - - t.send_chan <- &ZMQMessage{part: []byte(""), final: false} - t.send_chan <- &ZMQMessage{part: write_buffer.Bytes(), final: true} - - // Ask for send to start - t.bridge_chan <- []byte(zmq_signal_output) - return nil + var write_buffer *bytes.Buffer + write_buffer = bytes.NewBuffer(make([]byte, 0, len(signature)+4+len(message))) + + if _, err = write_buffer.Write([]byte(signature)); err != nil { + return + } + if err = binary.Write(write_buffer, binary.BigEndian, uint32(len(message))); err != nil { + return + } + if len(message) != 0 { + if _, err = write_buffer.Write(message); err != nil { + return + } + } + + // TODO: Fix regression where we could pend all payloads on a single ZMQ peer + // We should switch to full freelance pattern - ROUTER-to-ROUTER + // For this to work with ZMQ 3.2 we need to force identities on the server + // as the connection IP:Port and use those identities as available send pool + // For ZMQ 4+ we can skip using those and enable ZMQ_ROUTER_PROBE which sends + // an empty message on connection - server should respond with empty message + // which will allow us to populate identity list that way. 
+ // ZMQ 4 approach is rigid as it means we don't need to rely on fixed + // identities + + t.send_chan <- &ZMQMessage{part: []byte(""), final: false} + t.send_chan <- &ZMQMessage{part: write_buffer.Bytes(), final: true} + + // Ask for send to start + t.bridge_chan <- []byte(zmq_signal_output) + return nil } func (t *TransportZmq) Read() <-chan interface{} { - return t.recv_chan + return t.recv_chan } func (t *TransportZmq) Shutdown() { - if t.ready { - // Send shutdown request - t.bridge_chan <- []byte(zmq_signal_shutdown) - t.wait.Wait() - t.dealer.Close() - t.monitor.Close() - t.context.Close() - t.ready = false - } + if t.ready { + // Send shutdown request + t.bridge_chan <- []byte(zmq_signal_shutdown) + t.wait.Wait() + t.dealer.Close() + t.monitor.Close() + t.context.Close() + t.ready = false + } } // Register the transport func init() { - core.RegisterTransport("plainzmq", NewZmqTransportFactory) + core.RegisterTransport("plainzmq", NewZmqTransportFactory) } diff --git a/src/lc-lib/transports/zmq3.go b/src/lc-lib/transports/zmq3.go index f131b07e..543bae9f 100644 --- a/src/lc-lib/transports/zmq3.go +++ b/src/lc-lib/transports/zmq3.go @@ -31,75 +31,75 @@ struct zmq_event_t_wrap { import "C" import ( - "fmt" - zmq "github.com/alecthomas/gozmq" - "syscall" - "unsafe" + "fmt" + zmq "github.com/alecthomas/gozmq" + "syscall" + "unsafe" ) func (f *TransportZmqFactory) processConfig(config_path string) (err error) { - return nil + return nil } func (t *TransportZmq) configureSocket() error { - return nil + return nil } // Process ZMQ 3.2.x monitor messages // http://api.zeromq.org/3-2:zmq-socket-monitor func (t *TransportZmq) processMonitorIn() (ok bool) { - for { - // Bring in the messages - RetryRecv: - data, err := t.monitor.Recv(zmq.DONTWAIT) - if err != nil { - switch err { - case syscall.EINTR: - // Try again - goto RetryRecv - case syscall.EAGAIN: - // No more messages - ok = true - return - } + for { + // Bring in the messages + RetryRecv: + data, err := 
t.monitor.Recv(zmq.DONTWAIT) + if err != nil { + switch err { + case syscall.EINTR: + // Try again + goto RetryRecv + case syscall.EAGAIN: + // No more messages + ok = true + return + } - // Failure - t.recv_chan <- fmt.Errorf("Monitor zmq.Socket.Recv failure %s", err) - return - } + // Failure + t.recv_chan <- fmt.Errorf("Monitor zmq.Socket.Recv failure %s", err) + return + } - switch t.event.part { - case Monitor_Part_Header: - event := (*C.struct_zmq_event_t_wrap)(unsafe.Pointer(&data[0])) - t.event.event = zmq.Event(event.event) - if event.addr == nil { - t.event.data = "" - } else { - // TODO: Fix this - data has been feed by zmq_msg_close! - //t.event.data = C.GoString(event.addr) - t.event.data = "" - } - t.event.val = int32(event.fd) - t.event.Log() - default: - log.Debug("Extraneous data in monitor message. Silently discarding.") - continue - } + switch t.event.part { + case Monitor_Part_Header: + event := (*C.struct_zmq_event_t_wrap)(unsafe.Pointer(&data[0])) + t.event.event = zmq.Event(event.event) + if event.addr == nil { + t.event.data = "" + } else { + // TODO: Fix this - data has been freed by zmq_msg_close! + //t.event.data = C.GoString(event.addr) + t.event.data = "" + } + t.event.val = int32(event.fd) + t.event.Log() + default: + log.Debug("Extraneous data in monitor message. 
Silently discarding.") + continue + } - more, err := t.monitor.RcvMore() - if err != nil { - // Failure - t.recv_chan <- fmt.Errorf("Monitor zmq.Socket.RcvMore failure %s", err) - return - } + more, err := t.monitor.RcvMore() + if err != nil { + // Failure + t.recv_chan <- fmt.Errorf("Monitor zmq.Socket.RcvMore failure %s", err) + return + } - if !more { - t.event.part = Monitor_Part_Header - continue - } + if !more { + t.event.part = Monitor_Part_Header + continue + } - if t.event.part <= Monitor_Part_Data { - t.event.part++ - } - } + if t.event.part <= Monitor_Part_Data { + t.event.part++ + } + } } diff --git a/src/lc-lib/transports/zmq4.go b/src/lc-lib/transports/zmq4.go index 79e9b54f..10010c08 100644 --- a/src/lc-lib/transports/zmq4.go +++ b/src/lc-lib/transports/zmq4.go @@ -26,132 +26,132 @@ package transports import "C" import ( - "encoding/binary" - "fmt" - zmq "github.com/alecthomas/gozmq" - "github.com/driskell/log-courier/src/lc-lib/core" - "syscall" - "unsafe" + "encoding/binary" + "fmt" + zmq "github.com/alecthomas/gozmq" + "github.com/driskell/log-courier/src/lc-lib/core" + "syscall" + "unsafe" ) func (f *TransportZmqFactory) processConfig(config_path string) (err error) { - if len(f.CurveServerkey) == 0 { - return fmt.Errorf("Option %scurve server key is required", config_path) - } else if len(f.CurveServerkey) != 40 || !z85Validate(f.CurveServerkey) { - return fmt.Errorf("Option %scurve server key must be a valid 40 character Z85 encoded string", config_path) - } - if len(f.CurvePublickey) == 0 { - return fmt.Errorf("Option %scurve public key is required", config_path) - } else if len(f.CurvePublickey) != 40 || !z85Validate(f.CurvePublickey) { - return fmt.Errorf("Option %scurve public key must be a valid 40 character Z85 encoded string", config_path) - } - if len(f.CurveSecretkey) == 0 { - return fmt.Errorf("Option %scurve secret key is required", config_path) - } else if len(f.CurveSecretkey) != 40 || !z85Validate(f.CurveSecretkey) { - return 
fmt.Errorf("Option %scurve secret key must be a valid 40 character Z85 encoded string", config_path) - } - - return nil + if len(f.CurveServerkey) == 0 { + return fmt.Errorf("Option %scurve server key is required", config_path) + } else if len(f.CurveServerkey) != 40 || !z85Validate(f.CurveServerkey) { + return fmt.Errorf("Option %scurve server key must be a valid 40 character Z85 encoded string", config_path) + } + if len(f.CurvePublickey) == 0 { + return fmt.Errorf("Option %scurve public key is required", config_path) + } else if len(f.CurvePublickey) != 40 || !z85Validate(f.CurvePublickey) { + return fmt.Errorf("Option %scurve public key must be a valid 40 character Z85 encoded string", config_path) + } + if len(f.CurveSecretkey) == 0 { + return fmt.Errorf("Option %scurve secret key is required", config_path) + } else if len(f.CurveSecretkey) != 40 || !z85Validate(f.CurveSecretkey) { + return fmt.Errorf("Option %scurve secret key must be a valid 40 character Z85 encoded string", config_path) + } + + return nil } func (t *TransportZmq) configureSocket() (err error) { - if t.config.transport == "zmq" { - // Configure CurveMQ security - if err = t.dealer.SetCurveServerkey(t.config.CurveServerkey); err != nil { - return fmt.Errorf("Failed to set ZMQ curve server key: %s", err) - } - if err = t.dealer.SetCurvePublickey(t.config.CurvePublickey); err != nil { - return fmt.Errorf("Failed to set ZMQ curve public key: %s", err) - } - if err = t.dealer.SetCurveSecretkey(t.config.CurveSecretkey); err != nil { - return fmt.Errorf("Failed to set ZMQ curve secret key: %s", err) - } - } - return + if t.config.transport == "zmq" { + // Configure CurveMQ security + if err = t.dealer.SetCurveServerkey(t.config.CurveServerkey); err != nil { + return fmt.Errorf("Failed to set ZMQ curve server key: %s", err) + } + if err = t.dealer.SetCurvePublickey(t.config.CurvePublickey); err != nil { + return fmt.Errorf("Failed to set ZMQ curve public key: %s", err) + } + if err = 
t.dealer.SetCurveSecretkey(t.config.CurveSecretkey); err != nil { + return fmt.Errorf("Failed to set ZMQ curve secret key: %s", err) + } + } + return } // Process ZMQ 4.0.x monitor messages // http://api.zeromq.org/4-0:zmq-socket-monitor func (t *TransportZmq) processMonitorIn() (ok bool) { - for { - // Bring in the messages - RetryRecv: - data, err := t.monitor.Recv(zmq.DONTWAIT) - if err != nil { - switch err { - case syscall.EINTR: - // Try again - goto RetryRecv - case syscall.EAGAIN: - // No more messages - ok = true - return - } - - // Failure - t.recv_chan <- fmt.Errorf("Monitor zmq.Socket.Recv failure %s", err) - return - } - - switch t.event.part { - case Monitor_Part_Header: - t.event.event = zmq.Event(binary.LittleEndian.Uint16(data[0:2])) - t.event.val = int32(binary.LittleEndian.Uint32(data[2:6])) - t.event.data = "" - case Monitor_Part_Data: - t.event.data = string(data) - t.event.Log() - default: - log.Debug("Extraneous data in monitor message. Silently discarding.") - continue - } - - more, err := t.monitor.RcvMore() - if err != nil { - // Failure - t.recv_chan <- fmt.Errorf("Monitor zmq.Socket.RcvMore failure %s", err) - return - } - - if !more { - if t.event.part < Monitor_Part_Data { - t.event.Log() - log.Debug("Unexpected end of monitor message. 
Skipping.") - } - - t.event.part = Monitor_Part_Header - continue - } - - if t.event.part <= Monitor_Part_Data { - t.event.part++ - } - } + for { + // Bring in the messages + RetryRecv: + data, err := t.monitor.Recv(zmq.DONTWAIT) + if err != nil { + switch err { + case syscall.EINTR: + // Try again + goto RetryRecv + case syscall.EAGAIN: + // No more messages + ok = true + return + } + + // Failure + t.recv_chan <- fmt.Errorf("Monitor zmq.Socket.Recv failure %s", err) + return + } + + switch t.event.part { + case Monitor_Part_Header: + t.event.event = zmq.Event(binary.LittleEndian.Uint16(data[0:2])) + t.event.val = int32(binary.LittleEndian.Uint32(data[2:6])) + t.event.data = "" + case Monitor_Part_Data: + t.event.data = string(data) + t.event.Log() + default: + log.Debug("Extraneous data in monitor message. Silently discarding.") + continue + } + + more, err := t.monitor.RcvMore() + if err != nil { + // Failure + t.recv_chan <- fmt.Errorf("Monitor zmq.Socket.RcvMore failure %s", err) + return + } + + if !more { + if t.event.part < Monitor_Part_Data { + t.event.Log() + log.Debug("Unexpected end of monitor message. 
Skipping.") + } + + t.event.part = Monitor_Part_Header + continue + } + + if t.event.part <= Monitor_Part_Data { + t.event.part++ + } + } } func z85Validate(z85 string) bool { - var decoded []C.uint8_t + var decoded []C.uint8_t - if len(z85)%5 != 0 { - return false - } else { - // Avoid literal floats - decoded = make([]C.uint8_t, 8*len(z85)/10) - } + if len(z85)%5 != 0 { + return false + } else { + // Avoid literal floats + decoded = make([]C.uint8_t, 8*len(z85)/10) + } - // Grab a CString of the z85 we need to decode - c_z85 := C.CString(z85) - defer C.free(unsafe.Pointer(c_z85)) + // Grab a CString of the z85 we need to decode + c_z85 := C.CString(z85) + defer C.free(unsafe.Pointer(c_z85)) - // Because gozmq does not yet expose this for us, we have to expose it ourselves - if ret := C.zmq_z85_decode(&decoded[0], c_z85); ret == nil { - return false - } + // Because gozmq does not yet expose this for us, we have to expose it ourselves + if ret := C.zmq_z85_decode(&decoded[0], c_z85); ret == nil { + return false + } - return true + return true } // Register the transport func init() { - core.RegisterTransport("zmq", NewZmqTransportFactory) + core.RegisterTransport("zmq", NewZmqTransportFactory) } diff --git a/src/lc-tlscert/lc-tlscert.go b/src/lc-tlscert/lc-tlscert.go index 1ab52375..64da1659 100644 --- a/src/lc-tlscert/lc-tlscert.go +++ b/src/lc-tlscert/lc-tlscert.go @@ -22,183 +22,183 @@ package main import ( - "bufio" - "crypto/rand" - "crypto/rsa" - "crypto/x509" - "crypto/x509/pkix" - "encoding/pem" - "fmt" - "math/big" - "net" - "os" - "strconv" - "time" + "bufio" + "crypto/rand" + "crypto/rsa" + "crypto/x509" + "crypto/x509/pkix" + "encoding/pem" + "fmt" + "math/big" + "net" + "os" + "strconv" + "time" ) var input *bufio.Reader func init() { - input = bufio.NewReader(os.Stdin) + input = bufio.NewReader(os.Stdin) } func readString(prompt string) string { - fmt.Printf("%s: ", prompt) - - var line []byte - for { - data, prefix, _ := input.ReadLine() - line = 
append(line, data...) - if !prefix { - break - } - } - - return string(line) + fmt.Printf("%s: ", prompt) + + var line []byte + for { + data, prefix, _ := input.ReadLine() + line = append(line, data...) + if !prefix { + break + } + } + + return string(line) } func readNumber(prompt string) (num int64) { - var err error - for { - if num, err = strconv.ParseInt(readString(prompt), 0, 64); err != nil { - fmt.Println("Please enter a valid numerical value") - continue - } - break - } - return + var err error + for { + if num, err = strconv.ParseInt(readString(prompt), 0, 64); err != nil { + fmt.Println("Please enter a valid numerical value") + continue + } + break + } + return } func anyKey() { - input.ReadRune() + input.ReadRune() } func main() { - var err error - - template := x509.Certificate{ - Subject: pkix.Name{ - Organization: []string{"Log Courier"}, - }, - NotBefore: time.Now(), - - KeyUsage: x509.KeyUsageCertSign | x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature, - ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth}, - BasicConstraintsValid: true, - - IsCA: true, - } - - fmt.Println("Specify the Common Name for the certificate. The common name") - fmt.Println("can be anything, but is usually set to the server's primary") - fmt.Println("DNS name. Even if you plan to connect via IP address you") - fmt.Println("should specify the DNS name here.") - fmt.Println() - - template.Subject.CommonName = readString("Common name") - fmt.Println() - - fmt.Println("The next step is to add any additional DNS names and IP") - fmt.Println("addresses that clients may use to connect to the server. 
If") - fmt.Println("you plan to connect to the server via IP address and not DNS") - fmt.Println("then you must specify those IP addresses here.") - fmt.Println("When you are finished, just press enter.") - fmt.Println() - - var cnt = 0 - var val string - for { - cnt++ - - if val = readString(fmt.Sprintf("DNS or IP address %d", cnt)); val == "" { - break - } - - if ip := net.ParseIP(val); ip != nil { - template.IPAddresses = append(template.IPAddresses, ip) - } else { - template.DNSNames = append(template.DNSNames, val) - } - } - - fmt.Println() - - fmt.Println("How long should the certificate be valid for? A year (365") - fmt.Println("days) is usual but requires the certificate to be regenerated") - fmt.Println("within a year or the certificate will cease working.") - fmt.Println() - - template.NotAfter = template.NotBefore.Add(time.Duration(readNumber("Number of days")) * time.Hour * 24) - - fmt.Println("Common name:", template.Subject.CommonName) - fmt.Println("DNS SANs:") - if len(template.DNSNames) == 0 { - fmt.Println(" None") - } else { - for _, e := range template.DNSNames { - fmt.Println(" ", e) - } - } - fmt.Println("IP SANs:") - if len(template.IPAddresses) == 0 { - fmt.Println(" None") - } else { - for _, e := range template.IPAddresses { - fmt.Println(" ", e) - } - } - fmt.Println() - - fmt.Println("The certificate can now be generated") - fmt.Println("Press any key to begin generating the self-signed certificate.") - anyKey() - - priv, err := rsa.GenerateKey(rand.Reader, 2048) - if err != nil { - fmt.Println("Failed to generate private key:", err) - os.Exit(1) - } - - serialNumberLimit := new(big.Int).Lsh(big.NewInt(1), 128) - template.SerialNumber, err = rand.Int(rand.Reader, serialNumberLimit) - if err != nil { - fmt.Println("Failed to generate serial number:", err) - os.Exit(1) - } - - derBytes, err := x509.CreateCertificate(rand.Reader, &template, &template, &priv.PublicKey, priv) - if err != nil { - fmt.Println("Failed to create certificate:", 
err) - os.Exit(1) - } - - certOut, err := os.Create("selfsigned.crt") - if err != nil { - fmt.Println("Failed to open selfsigned.pem for writing:", err) - os.Exit(1) - } - pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: derBytes}) - certOut.Close() - - keyOut, err := os.OpenFile("selfsigned.key", os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600) - if err != nil { - fmt.Println("failed to open selfsigned.key for writing:", err) - os.Exit(1) - } - pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(priv)}) - keyOut.Close() - - fmt.Println("Successfully generated certificate") - fmt.Println(" Certificate: selfsigned.crt") - fmt.Println(" Private Key: selfsigned.key") - fmt.Println() - fmt.Println("Copy and paste the following into your Log Courier") - fmt.Println("configuration, adjusting paths as necessary:") - fmt.Println(" \"transport\": \"tls\",") - fmt.Println(" \"ssl ca\": \"path/to/selfsigned.crt\",") - fmt.Println() - fmt.Println("Copy and paste the following into your LogStash configuration, ") - fmt.Println("adjusting paths as necessary:") - fmt.Println(" ssl_certificate => \"path/to/selfsigned.crt\",") - fmt.Println(" ssl_key => \"path/to/selfsigned.key\",") + var err error + + template := x509.Certificate{ + Subject: pkix.Name{ + Organization: []string{"Log Courier"}, + }, + NotBefore: time.Now(), + + KeyUsage: x509.KeyUsageCertSign | x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature, + ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth}, + BasicConstraintsValid: true, + + IsCA: true, + } + + fmt.Println("Specify the Common Name for the certificate. The common name") + fmt.Println("can be anything, but is usually set to the server's primary") + fmt.Println("DNS name. 
Even if you plan to connect via IP address you") + fmt.Println("should specify the DNS name here.") + fmt.Println() + + template.Subject.CommonName = readString("Common name") + fmt.Println() + + fmt.Println("The next step is to add any additional DNS names and IP") + fmt.Println("addresses that clients may use to connect to the server. If") + fmt.Println("you plan to connect to the server via IP address and not DNS") + fmt.Println("then you must specify those IP addresses here.") + fmt.Println("When you are finished, just press enter.") + fmt.Println() + + var cnt = 0 + var val string + for { + cnt++ + + if val = readString(fmt.Sprintf("DNS or IP address %d", cnt)); val == "" { + break + } + + if ip := net.ParseIP(val); ip != nil { + template.IPAddresses = append(template.IPAddresses, ip) + } else { + template.DNSNames = append(template.DNSNames, val) + } + } + + fmt.Println() + + fmt.Println("How long should the certificate be valid for? A year (365") + fmt.Println("days) is usual but requires the certificate to be regenerated") + fmt.Println("within a year or the certificate will cease working.") + fmt.Println() + + template.NotAfter = template.NotBefore.Add(time.Duration(readNumber("Number of days")) * time.Hour * 24) + + fmt.Println("Common name:", template.Subject.CommonName) + fmt.Println("DNS SANs:") + if len(template.DNSNames) == 0 { + fmt.Println(" None") + } else { + for _, e := range template.DNSNames { + fmt.Println(" ", e) + } + } + fmt.Println("IP SANs:") + if len(template.IPAddresses) == 0 { + fmt.Println(" None") + } else { + for _, e := range template.IPAddresses { + fmt.Println(" ", e) + } + } + fmt.Println() + + fmt.Println("The certificate can now be generated") + fmt.Println("Press any key to begin generating the self-signed certificate.") + anyKey() + + priv, err := rsa.GenerateKey(rand.Reader, 2048) + if err != nil { + fmt.Println("Failed to generate private key:", err) + os.Exit(1) + } + + serialNumberLimit := 
new(big.Int).Lsh(big.NewInt(1), 128) + template.SerialNumber, err = rand.Int(rand.Reader, serialNumberLimit) + if err != nil { + fmt.Println("Failed to generate serial number:", err) + os.Exit(1) + } + + derBytes, err := x509.CreateCertificate(rand.Reader, &template, &template, &priv.PublicKey, priv) + if err != nil { + fmt.Println("Failed to create certificate:", err) + os.Exit(1) + } + + certOut, err := os.Create("selfsigned.crt") + if err != nil { + fmt.Println("Failed to open selfsigned.crt for writing:", err) + os.Exit(1) + } + pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: derBytes}) + certOut.Close() + + keyOut, err := os.OpenFile("selfsigned.key", os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600) + if err != nil { + fmt.Println("Failed to open selfsigned.key for writing:", err) + os.Exit(1) + } + pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(priv)}) + keyOut.Close() + + fmt.Println("Successfully generated certificate") + fmt.Println(" Certificate: selfsigned.crt") + fmt.Println(" Private Key: selfsigned.key") + fmt.Println() + fmt.Println("Copy and paste the following into your Log Courier") + fmt.Println("configuration, adjusting paths as necessary:") + fmt.Println(" \"transport\": \"tls\",") + fmt.Println(" \"ssl ca\": \"path/to/selfsigned.crt\",") + fmt.Println() + fmt.Println("Copy and paste the following into your LogStash configuration, ") + fmt.Println("adjusting paths as necessary:") + fmt.Println(" ssl_certificate => \"path/to/selfsigned.crt\",") + fmt.Println(" ssl_key => \"path/to/selfsigned.key\",") } diff --git a/src/log-courier/log-courier.go b/src/log-courier/log-courier.go index c8f12a62..b9aa627c 100644 --- a/src/log-courier/log-courier.go +++ b/src/log-courier/log-courier.go @@ -20,309 +20,308 @@ package main import ( - "flag" - "fmt" - "github.com/op/go-logging" - "github.com/driskell/log-courier/src/lc-lib/admin" - "github.com/driskell/log-courier/src/lc-lib/core" - 
"github.com/driskell/log-courier/src/lc-lib/harvester" - "github.com/driskell/log-courier/src/lc-lib/prospector" - "github.com/driskell/log-courier/src/lc-lib/spooler" - "github.com/driskell/log-courier/src/lc-lib/publisher" - "github.com/driskell/log-courier/src/lc-lib/registrar" - stdlog "log" - "os" - "runtime/pprof" - "time" + "flag" + "fmt" + "github.com/driskell/log-courier/src/lc-lib/admin" + "github.com/driskell/log-courier/src/lc-lib/core" + "github.com/driskell/log-courier/src/lc-lib/harvester" + "github.com/driskell/log-courier/src/lc-lib/prospector" + "github.com/driskell/log-courier/src/lc-lib/publisher" + "github.com/driskell/log-courier/src/lc-lib/registrar" + "github.com/driskell/log-courier/src/lc-lib/spooler" + "github.com/op/go-logging" + stdlog "log" + "os" + "runtime/pprof" + "time" ) import _ "github.com/driskell/log-courier/src/lc-lib/codecs" import _ "github.com/driskell/log-courier/src/lc-lib/transports" func main() { - logcourier := NewLogCourier() - logcourier.Run() + logcourier := NewLogCourier() + logcourier.Run() } type LogCourier struct { - pipeline *core.Pipeline - config *core.Config - shutdown_chan chan os.Signal - reload_chan chan os.Signal - config_file string - stdin bool - from_beginning bool - harvester *harvester.Harvester - log_file *DefaultLogBackend - last_snapshot time.Time - snapshot *core.Snapshot + pipeline *core.Pipeline + config *core.Config + shutdown_chan chan os.Signal + reload_chan chan os.Signal + config_file string + stdin bool + from_beginning bool + harvester *harvester.Harvester + log_file *DefaultLogBackend + last_snapshot time.Time + snapshot *core.Snapshot } func NewLogCourier() *LogCourier { - ret := &LogCourier{ - pipeline: core.NewPipeline(), - } - return ret + ret := &LogCourier{ + pipeline: core.NewPipeline(), + } + return ret } func (lc *LogCourier) Run() { - var admin_listener *admin.Listener - var on_command <-chan string - var harvester_wait <-chan *harvester.HarvesterFinish - var registrar_imp 
*registrar.Registrar + var admin_listener *admin.Listener + var on_command <-chan string + var harvester_wait <-chan *harvester.HarvesterFinish + var registrar_imp *registrar.Registrar - lc.startUp() + lc.startUp() - log.Info("Log Courier version %s pipeline starting", core.Log_Courier_Version) + log.Info("Log Courier version %s pipeline starting", core.Log_Courier_Version) - // If reading from stdin, skip admin, and set up a null registrar - if lc.stdin { - registrar_imp = nil - } else { - if lc.config.General.AdminEnabled { - var err error + // If reading from stdin, skip admin, and set up a null registrar + if lc.stdin { + registrar_imp = nil + } else { + if lc.config.General.AdminEnabled { + var err error - admin_listener, err = admin.NewListener(lc.pipeline, &lc.config.General) - if err != nil { - log.Fatalf("Failed to initialise: %s", err) - } + admin_listener, err = admin.NewListener(lc.pipeline, &lc.config.General) + if err != nil { + log.Fatalf("Failed to initialise: %s", err) + } - on_command = admin_listener.OnCommand() - } + on_command = admin_listener.OnCommand() + } - registrar_imp = registrar.NewRegistrar(lc.pipeline, lc.config.General.PersistDir) - } + registrar_imp = registrar.NewRegistrar(lc.pipeline, lc.config.General.PersistDir) + } - publisher, err := publisher.NewPublisher(lc.pipeline, &lc.config.Network, registrar_imp) - if err != nil { - log.Fatalf("Failed to initialise: %s", err) - } + publisher, err := publisher.NewPublisher(lc.pipeline, &lc.config.Network, registrar_imp) + if err != nil { + log.Fatalf("Failed to initialise: %s", err) + } - spooler_imp := spooler.NewSpooler(lc.pipeline, &lc.config.General, publisher) + spooler_imp := spooler.NewSpooler(lc.pipeline, &lc.config.General, publisher) - // If reading from stdin, don't start prospector, directly start a harvester - if lc.stdin { - lc.harvester = harvester.NewHarvester(nil, lc.config, &lc.config.Stdin, 0) - lc.harvester.Start(spooler_imp.Connect()) - harvester_wait = 
lc.harvester.OnFinish() - } else { - if _, err := prospector.NewProspector(lc.pipeline, lc.config, lc.from_beginning, registrar_imp, spooler_imp); err != nil { - log.Fatalf("Failed to initialise: %s", err) - } - } + // If reading from stdin, don't start prospector, directly start a harvester + if lc.stdin { + lc.harvester = harvester.NewHarvester(nil, lc.config, &lc.config.Stdin, 0) + lc.harvester.Start(spooler_imp.Connect()) + harvester_wait = lc.harvester.OnFinish() + } else { + if _, err := prospector.NewProspector(lc.pipeline, lc.config, lc.from_beginning, registrar_imp, spooler_imp); err != nil { + log.Fatalf("Failed to initialise: %s", err) + } + } - // Start the pipeline - lc.pipeline.Start() + // Start the pipeline + lc.pipeline.Start() - log.Notice("Pipeline ready") + log.Notice("Pipeline ready") - lc.shutdown_chan = make(chan os.Signal, 1) - lc.reload_chan = make(chan os.Signal, 1) - lc.registerSignals() + lc.shutdown_chan = make(chan os.Signal, 1) + lc.reload_chan = make(chan os.Signal, 1) + lc.registerSignals() SignalLoop: - for { - select { - case <-lc.shutdown_chan: - lc.cleanShutdown() - break SignalLoop - case <-lc.reload_chan: - lc.reloadConfig() - case command := <-on_command: - admin_listener.Respond(lc.processCommand(command)) - case finished := <-harvester_wait: - if finished.Error != nil { - log.Notice("An error occurred reading from stdin at offset %d: %s", finished.Last_Offset, finished.Error) - } else { - log.Notice("Finished reading from stdin at offset %d", finished.Last_Offset) - } - lc.harvester = nil - lc.cleanShutdown() - break SignalLoop - } - } - - log.Notice("Exiting") - - if lc.log_file != nil { - lc.log_file.Close() - } + for { + select { + case <-lc.shutdown_chan: + lc.cleanShutdown() + break SignalLoop + case <-lc.reload_chan: + lc.reloadConfig() + case command := <-on_command: + admin_listener.Respond(lc.processCommand(command)) + case finished := <-harvester_wait: + if finished.Error != nil { + log.Notice("An error occurred 
reading from stdin at offset %d: %s", finished.Last_Offset, finished.Error) + } else { + log.Notice("Finished reading from stdin at offset %d", finished.Last_Offset) + } + lc.harvester = nil + lc.cleanShutdown() + break SignalLoop + } + } + + log.Notice("Exiting") + + if lc.log_file != nil { + lc.log_file.Close() + } } func (lc *LogCourier) startUp() { - var version bool - var config_test bool - var list_supported bool - var cpu_profile string - - flag.BoolVar(&version, "version", false, "show version information") - flag.BoolVar(&config_test, "config-test", false, "Test the configuration specified by -config and exit") - flag.BoolVar(&list_supported, "list-supported", false, "List supported transports and codecs") - flag.StringVar(&cpu_profile, "cpuprofile", "", "write cpu profile to file") - - flag.StringVar(&lc.config_file, "config", "", "The config file to load") - flag.BoolVar(&lc.stdin, "stdin", false, "Read from stdin instead of files listed in the config file") - flag.BoolVar(&lc.from_beginning, "from-beginning", false, "On first run, read new files from the beginning instead of the end") - - flag.Parse() - - if version { - fmt.Printf("Log Courier version %s\n", core.Log_Courier_Version) - os.Exit(0) - } - - if list_supported { - fmt.Printf("Available transports:\n") - for _, transport := range core.AvailableTransports() { - fmt.Printf(" %s\n", transport) - } - - fmt.Printf("Available codecs:\n") - for _, codec := range core.AvailableCodecs() { - fmt.Printf(" %s\n", codec) - } - os.Exit(0) - } - - if lc.config_file == "" { - fmt.Fprintf(os.Stderr, "Please specify a configuration file with -config.\n\n") - flag.PrintDefaults() - os.Exit(1) - } - - err := lc.loadConfig() - - if config_test { - if err == nil { - fmt.Printf("Configuration OK\n") - os.Exit(0) - } - fmt.Printf("Configuration test failed: %s\n", err) - os.Exit(1) - } - - if err != nil { - fmt.Printf("Configuration error: %s\n", err) - os.Exit(1) - } - - if err = lc.configureLogging(); err != nil { 
- fmt.Printf("Failed to initialise logging: %s", err) - os.Exit(1) - } - - if cpu_profile != "" { - log.Notice("Starting CPU profiler") - f, err := os.Create(cpu_profile) - if err != nil { - log.Fatal(err) - } - pprof.StartCPUProfile(f) - go func() { - time.Sleep(60 * time.Second) - pprof.StopCPUProfile() - log.Panic("CPU profile completed") - }() - } + var version bool + var config_test bool + var list_supported bool + var cpu_profile string + + flag.BoolVar(&version, "version", false, "show version information") + flag.BoolVar(&config_test, "config-test", false, "Test the configuration specified by -config and exit") + flag.BoolVar(&list_supported, "list-supported", false, "List supported transports and codecs") + flag.StringVar(&cpu_profile, "cpuprofile", "", "write cpu profile to file") + + flag.StringVar(&lc.config_file, "config", "", "The config file to load") + flag.BoolVar(&lc.stdin, "stdin", false, "Read from stdin instead of files listed in the config file") + flag.BoolVar(&lc.from_beginning, "from-beginning", false, "On first run, read new files from the beginning instead of the end") + + flag.Parse() + + if version { + fmt.Printf("Log Courier version %s\n", core.Log_Courier_Version) + os.Exit(0) + } + + if list_supported { + fmt.Printf("Available transports:\n") + for _, transport := range core.AvailableTransports() { + fmt.Printf(" %s\n", transport) + } + + fmt.Printf("Available codecs:\n") + for _, codec := range core.AvailableCodecs() { + fmt.Printf(" %s\n", codec) + } + os.Exit(0) + } + + if lc.config_file == "" { + fmt.Fprintf(os.Stderr, "Please specify a configuration file with -config.\n\n") + flag.PrintDefaults() + os.Exit(1) + } + + err := lc.loadConfig() + + if config_test { + if err == nil { + fmt.Printf("Configuration OK\n") + os.Exit(0) + } + fmt.Printf("Configuration test failed: %s\n", err) + os.Exit(1) + } + + if err != nil { + fmt.Printf("Configuration error: %s\n", err) + os.Exit(1) + } + + if err = lc.configureLogging(); err != nil { 
+ fmt.Printf("Failed to initialise logging: %s", err) + os.Exit(1) + } + + if cpu_profile != "" { + log.Notice("Starting CPU profiler") + f, err := os.Create(cpu_profile) + if err != nil { + log.Fatal(err) + } + pprof.StartCPUProfile(f) + go func() { + time.Sleep(60 * time.Second) + pprof.StopCPUProfile() + log.Panic("CPU profile completed") + }() + } } func (lc *LogCourier) configureLogging() (err error) { - backends := make([]logging.Backend, 0, 1) + backends := make([]logging.Backend, 0, 1) - // First, the stdout backend - if lc.config.General.LogStdout { - backends = append(backends, logging.NewLogBackend(os.Stdout, "", stdlog.LstdFlags|stdlog.Lmicroseconds)) - } + // First, the stdout backend + if lc.config.General.LogStdout { + backends = append(backends, logging.NewLogBackend(os.Stdout, "", stdlog.LstdFlags|stdlog.Lmicroseconds)) + } - // Log file? - if lc.config.General.LogFile != "" { - lc.log_file, err = NewDefaultLogBackend(lc.config.General.LogFile, "", stdlog.LstdFlags|stdlog.Lmicroseconds) - if err != nil { - return - } + // Log file? + if lc.config.General.LogFile != "" { + lc.log_file, err = NewDefaultLogBackend(lc.config.General.LogFile, "", stdlog.LstdFlags|stdlog.Lmicroseconds) + if err != nil { + return + } - backends = append(backends, lc.log_file) - } + backends = append(backends, lc.log_file) + } - if err = lc.configureLoggingPlatform(&backends); err != nil { - return - } + if err = lc.configureLoggingPlatform(&backends); err != nil { + return + } - // Set backends BEFORE log level (or we reset log level) - logging.SetBackend(backends...) + // Set backends BEFORE log level (or we reset log level) + logging.SetBackend(backends...) 
- // Set the logging level - logging.SetLevel(lc.config.General.LogLevel, "") + // Set the logging level + logging.SetLevel(lc.config.General.LogLevel, "") - return nil + return nil } func (lc *LogCourier) loadConfig() error { - lc.config = core.NewConfig() - if err := lc.config.Load(lc.config_file); err != nil { - return err - } - - if lc.stdin { - // TODO: Where to find stdin config for codec and fields? - } else if len(lc.config.Files) == 0 { - log.Warning("No file groups were found in the configuration.") - } - - return nil + lc.config = core.NewConfig() + if err := lc.config.Load(lc.config_file); err != nil { + return err + } + + if lc.stdin { + // TODO: Where to find stdin config for codec and fields? + } else if len(lc.config.Files) == 0 { + log.Warning("No file groups were found in the configuration.") + } + + return nil } func (lc *LogCourier) reloadConfig() error { - if err := lc.loadConfig(); err != nil { - return err - } + if err := lc.loadConfig(); err != nil { + return err + } - log.Notice("Configuration reload successful") + log.Notice("Configuration reload successful") - // Update the log level - logging.SetLevel(lc.config.General.LogLevel, "") + // Update the log level + logging.SetLevel(lc.config.General.LogLevel, "") - // Reopen the log file if we specified one - if lc.log_file != nil { - lc.log_file.Reopen() - log.Notice("Log file reopened") - } + // Reopen the log file if we specified one + if lc.log_file != nil { + lc.log_file.Reopen() + log.Notice("Log file reopened") + } - // Pass the new config to the pipeline workers - lc.pipeline.SendConfig(lc.config) + // Pass the new config to the pipeline workers + lc.pipeline.SendConfig(lc.config) - return nil + return nil } func (lc *LogCourier) processCommand(command string) *admin.Response { - switch command { - case "RELD": - if err := lc.reloadConfig(); err != nil { - return &admin.Response{&admin.ErrorResponse{Message: fmt.Sprintf("Configuration error, reload unsuccessful: %s", err.Error())}} - 
} - return &admin.Response{&admin.ReloadResponse{}} - case "SNAP": - if lc.snapshot == nil || time.Since(lc.last_snapshot) >= time.Second { - lc.snapshot = lc.pipeline.Snapshot() - lc.snapshot.Sort() - lc.last_snapshot = time.Now() - } - return &admin.Response{lc.snapshot} - } - - - return &admin.Response{&admin.ErrorResponse{Message: "Unknown command"}} + switch command { + case "RELD": + if err := lc.reloadConfig(); err != nil { + return &admin.Response{&admin.ErrorResponse{Message: fmt.Sprintf("Configuration error, reload unsuccessful: %s", err.Error())}} + } + return &admin.Response{&admin.ReloadResponse{}} + case "SNAP": + if lc.snapshot == nil || time.Since(lc.last_snapshot) >= time.Second { + lc.snapshot = lc.pipeline.Snapshot() + lc.snapshot.Sort() + lc.last_snapshot = time.Now() + } + return &admin.Response{lc.snapshot} + } + + return &admin.Response{&admin.ErrorResponse{Message: "Unknown command"}} } func (lc *LogCourier) cleanShutdown() { - log.Notice("Initiating shutdown") + log.Notice("Initiating shutdown") - if lc.harvester != nil { - lc.harvester.Stop() - finished := <-lc.harvester.OnFinish() - log.Notice("Aborted reading from stdin at offset %d", finished.Last_Offset) - } + if lc.harvester != nil { + lc.harvester.Stop() + finished := <-lc.harvester.OnFinish() + log.Notice("Aborted reading from stdin at offset %d", finished.Last_Offset) + } - lc.pipeline.Shutdown() - lc.pipeline.Wait() + lc.pipeline.Shutdown() + lc.pipeline.Wait() } diff --git a/src/log-courier/log-courier_nonwindows.go b/src/log-courier/log-courier_nonwindows.go index 7bc5ab01..99da751f 100644 --- a/src/log-courier/log-courier_nonwindows.go +++ b/src/log-courier/log-courier_nonwindows.go @@ -19,46 +19,46 @@ package main import ( - "fmt" - "github.com/op/go-logging" - "os" - "os/signal" - "syscall" - "unsafe" + "fmt" + "github.com/op/go-logging" + "os" + "os/signal" + "syscall" + "unsafe" ) func (lc *LogCourier) registerSignals() { - // *nix systems support SIGTERM so handle shutdown 
on that too - signal.Notify(lc.shutdown_chan, os.Interrupt, syscall.SIGTERM) + // *nix systems support SIGTERM so handle shutdown on that too + signal.Notify(lc.shutdown_chan, os.Interrupt, syscall.SIGTERM) - // *nix has SIGHUP for reload - signal.Notify(lc.reload_chan, syscall.SIGHUP) + // *nix has SIGHUP for reload + signal.Notify(lc.reload_chan, syscall.SIGHUP) } func (lc *LogCourier) configureLoggingPlatform(backends *[]logging.Backend) error { - // Make it color if it's a TTY - // TODO: This could be prone to problems when updating logging in future - if lc.isatty(os.Stdout) && lc.config.General.LogStdout { - (*backends)[0].(*logging.LogBackend).Color = true - } + // Make it color if it's a TTY + // TODO: This could be prone to problems when updating logging in future + if lc.isatty(os.Stdout) && lc.config.General.LogStdout { + (*backends)[0].(*logging.LogBackend).Color = true + } - if lc.config.General.LogSyslog { - syslog_backend, err := logging.NewSyslogBackend("log-courier") - if err != nil { - return fmt.Errorf("Failed to open syslog: %s", err) - } - new_backends := append(*backends, syslog_backend) - *backends = new_backends - } + if lc.config.General.LogSyslog { + syslog_backend, err := logging.NewSyslogBackend("log-courier") + if err != nil { + return fmt.Errorf("Failed to open syslog: %s", err) + } + new_backends := append(*backends, syslog_backend) + *backends = new_backends + } - return nil + return nil } func (lc *LogCourier) isatty(f *os.File) bool { - var pgrp int64 - // Most real isatty implementations use TIOCGETA - // However, TIOCGPRGP is easier than TIOCGETA as it only requires an int and not a termios struct - // There is a possibility it may not have the exact same effect - but seems fine to me - _, _, err := syscall.Syscall(syscall.SYS_IOCTL, f.Fd(), syscall.TIOCGPGRP, uintptr(unsafe.Pointer(&pgrp))) - return err == 0 + var pgrp int64 + // Most real isatty implementations use TIOCGETA + // However, TIOCGPGRP is easier than TIOCGETA as it only requires an int and not a termios struct + // There is a possibility it may not have the exact same effect - but seems fine to me + _, _, err := syscall.Syscall(syscall.SYS_IOCTL, f.Fd(), syscall.TIOCGPGRP, uintptr(unsafe.Pointer(&pgrp))) + return err == 0 } diff --git a/src/log-courier/log-courier_windows.go b/src/log-courier/log-courier_windows.go index 570e2c31..58f40622 100644 --- a/src/log-courier/log-courier_windows.go +++ b/src/log-courier/log-courier_windows.go @@ -19,18 +19,18 @@ package main import ( - "github.com/op/go-logging" - "os" - "os/signal" + "github.com/op/go-logging" + "os" + "os/signal" ) func (lc *LogCourier) registerSignals() { - // Windows onyl supports os.Interrupt - signal.Notify(lc.shutdown_chan, os.Interrupt) + // Windows only supports os.Interrupt + signal.Notify(lc.shutdown_chan, os.Interrupt) - // No reload signal for Windows - implementation will have to wait + // No reload signal for Windows - implementation will have to wait } func (lc *LogCourier) configureLoggingPlatform(backends *[]logging.Backend) error { - return nil + return nil } diff --git a/src/log-courier/logging.go b/src/log-courier/logging.go index b7c48060..9c4eec91 100644 --- a/src/log-courier/logging.go +++ b/src/log-courier/logging.go @@ -12,76 +12,76 @@ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License.
-*/ + */ package main import ( - "github.com/op/go-logging" - golog "log" - "io/ioutil" - "os" + "github.com/op/go-logging" + "io/ioutil" + golog "log" + "os" ) var log *logging.Logger func init() { - log = logging.MustGetLogger("log-courier") + log = logging.MustGetLogger("log-courier") } type DefaultLogBackend struct { - file *os.File - path string + file *os.File + path string } func NewDefaultLogBackend(path string, prefix string, flag int) (*DefaultLogBackend, error) { - ret := &DefaultLogBackend{ - path: path, - } + ret := &DefaultLogBackend{ + path: path, + } - golog.SetPrefix(prefix) - golog.SetFlags(flag) + golog.SetPrefix(prefix) + golog.SetFlags(flag) - err := ret.Reopen() - if err != nil { - return nil, err - } + err := ret.Reopen() + if err != nil { + return nil, err + } - return ret, nil + return ret, nil } func (f *DefaultLogBackend) Log(level logging.Level, calldepth int, rec *logging.Record) error { - golog.Print(rec.Formatted(calldepth+1)) - return nil + golog.Print(rec.Formatted(calldepth + 1)) + return nil } func (f *DefaultLogBackend) Reopen() (err error) { - var new_file *os.File + var new_file *os.File - new_file, err = os.OpenFile(f.path, os.O_CREATE|os.O_RDWR|os.O_APPEND, 0640) - if err != nil { - return - } + new_file, err = os.OpenFile(f.path, os.O_CREATE|os.O_RDWR|os.O_APPEND, 0640) + if err != nil { + return + } - // Switch to new output before closing - golog.SetOutput(new_file) + // Switch to new output before closing + golog.SetOutput(new_file) - if f.file != nil { - f.file.Close() - } + if f.file != nil { + f.file.Close() + } - f.file = new_file + f.file = new_file - return nil + return nil } func (f *DefaultLogBackend) Close() { - // Discard logs before closing - golog.SetOutput(ioutil.Discard) + // Discard logs before closing + golog.SetOutput(ioutil.Discard) - if f.file != nil { - f.file.Close() - } + if f.file != nil { + f.file.Close() + } - f.file = nil + f.file = nil } From 7751ced5879f611e1df63799931b69791ca8ad07 Mon Sep 17 
00:00:00 2001 From: Jason Woods Date: Mon, 19 Jan 2015 22:56:28 +0000 Subject: [PATCH 17/75] Additional debug messages for pending payloads --- src/lc-lib/publisher/publisher.go | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/src/lc-lib/publisher/publisher.go b/src/lc-lib/publisher/publisher.go index 55492f17..1d99a84c 100644 --- a/src/lc-lib/publisher/publisher.go +++ b/src/lc-lib/publisher/publisher.go @@ -310,7 +310,9 @@ PublishLoop: if p.num_payloads >= p.config.MaxPendingPayloads { // Too many pending payloads, disable send temporarily p.can_send = nil - log.Debug("Pending payload limit reached") + log.Debug("Pending payload limit of %d reached", p.config.MaxPendingPayloads) + } else { + log.Debug("%d/%d pending payloads now in transit", p.num_payloads, p.config.MaxPendingPayloads) } // Expect an ACK within network timeout if this is first payload after idle From 61b7f95fbbc70a7916476da6f07252e5a4a41207 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Mon, 19 Jan 2015 22:59:28 +0000 Subject: [PATCH 18/75] Fix discard causing exception in logstash input plugin zmq transport (#92) --- docs/ChangeLog.md | 2 ++ lib/log-courier/server_zmq.rb | 7 +++++-- 2 files changed, 7 insertions(+), 2 deletions(-) diff --git a/docs/ChangeLog.md b/docs/ChangeLog.md index 0088ff13..ddf71928 100644 --- a/docs/ChangeLog.md +++ b/docs/ChangeLog.md @@ -44,6 +44,8 @@ configuration. (Thanks @mhughes - #88) * A configuration reload will now reopen log files. 
(#91) * Implement support for SRV record server entries (#85) * Fix Log Courier output plugin (#96) +* Fix Logstash input plugin with zmq transport failing when discarding a message +due to peer_recv_queue being exceeded (#92) ***Security*** diff --git a/lib/log-courier/server_zmq.rb b/lib/log-courier/server_zmq.rb index a0540a20..ac25207d 100644 --- a/lib/log-courier/server_zmq.rb +++ b/lib/log-courier/server_zmq.rb @@ -273,8 +273,11 @@ def deliver(source, data, &block) } end - # Existing thread, throw on the queue, if not enough room drop the message - index['']['client'].push data, 0 + # Existing thread, throw on the queue, if not enough room (timeout) drop the message + begin + index['']['client'].push data, 0 + rescue LogCourier::TimeoutError + end end return end From ed47cbaafa86a1f14571e5878a9ac15dec46a310 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Mon, 19 Jan 2015 23:01:04 +0000 Subject: [PATCH 19/75] TODO note - should we emit a warning if peer_recv_queue is exceeded? --- lib/log-courier/server_zmq.rb | 1 + 1 file changed, 1 insertion(+) diff --git a/lib/log-courier/server_zmq.rb b/lib/log-courier/server_zmq.rb index ac25207d..e9ad345c 100644 --- a/lib/log-courier/server_zmq.rb +++ b/lib/log-courier/server_zmq.rb @@ -277,6 +277,7 @@ def deliver(source, data, &block) begin index['']['client'].push data, 0 rescue LogCourier::TimeoutError + # TODO: Log a warning about this? 
end end return From 9cf268af68763d75543ca5d5d4d02a74aebb7687 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Thu, 22 Jan 2015 19:35:51 +0000 Subject: [PATCH 20/75] Fix undefined constant due to undefined Timeout when only the output library is loaded (#98) --- lib/log-courier/client.rb | 1 + 1 file changed, 1 insertion(+) diff --git a/lib/log-courier/client.rb b/lib/log-courier/client.rb index e01e6b43..3b8a55e5 100644 --- a/lib/log-courier/client.rb +++ b/lib/log-courier/client.rb @@ -25,6 +25,7 @@ class NativeException; end module LogCourier + class TimeoutError < StandardError; end class ShutdownSignal < StandardError; end class ProtocolError < StandardError; end From 9a16d3361fb5f4f90ceb4556364ecea8dcc6e57f Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Thu, 22 Jan 2015 19:43:32 +0000 Subject: [PATCH 21/75] Fix a TCP transport race condition in send() error handling which can cause publisher to deadlock (Fixes #100) --- src/lc-lib/transports/tcp.go | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/src/lc-lib/transports/tcp.go b/src/lc-lib/transports/tcp.go index 11441890..bc5b77b9 100644 --- a/src/lc-lib/transports/tcp.go +++ b/src/lc-lib/transports/tcp.go @@ -304,8 +304,9 @@ SendLoop: // Shutdown will have been received by the wrapper break SendLoop } else { - // Pass error back + // Pass the error back and abort t.recv_chan <- err + break SendLoop } } } From 290ea3413abf9ec114468ca56197ddc1f512e23f Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Thu, 22 Jan 2015 20:33:44 +0000 Subject: [PATCH 22/75] Remove existing socket file if it already exists (#101) --- src/lc-lib/admin/transport_unix.go | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/src/lc-lib/admin/transport_unix.go b/src/lc-lib/admin/transport_unix.go index f577e4aa..4c53460d 100644 --- a/src/lc-lib/admin/transport_unix.go +++ b/src/lc-lib/admin/transport_unix.go @@ -49,6 +49,14 @@ func listenUnix(transport, addr string) (NetListener, error) { return nil, 
fmt.Errorf("The admin bind address specified is not valid: %s", err) } + // Remove previous socket file if it's still there or we'll get address + // already in use error + if _, err = os.Stat(addr); err == nil || !os.IsNotExist(err) { + if err := os.Remove(addr); err != nil && !os.IsNotExist(err) { + return nil, fmt.Errorf("Failed to remove the existing socket file: %s", err) + } + } + listener, err := net.ListenUnix("unix", uaddr) if err != nil { return nil, err From a436cb6ac29c16be4c08329400327573e32235eb Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Thu, 22 Jan 2015 20:35:35 +0000 Subject: [PATCH 23/75] Update changelog --- docs/ChangeLog.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/docs/ChangeLog.md b/docs/ChangeLog.md index ddf71928..edf03775 100644 --- a/docs/ChangeLog.md +++ b/docs/ChangeLog.md @@ -46,6 +46,10 @@ configuration. (Thanks @mhughes - #88) * Fix Log Courier output plugin (#96) * Fix Logstash input plugin with zmq transport failing when discarding a message due to peer_recv_queue being exceeded (#92) +* Fix a TCP transport race condition that could deadlock publisher on a send() +error (#100) +* Fix "address already in use" startup error when admin is enabled on a unix +socket and the unix socket file already exists during startup (#101) ***Security*** From 47fd948ba422822d92f4fc388ba988b88b9ec60b Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Thu, 22 Jan 2015 20:52:27 +0000 Subject: [PATCH 24/75] Fix build failure caused by fix for #101 --- src/lc-lib/admin/transport_unix.go | 1 + 1 file changed, 1 insertion(+) diff --git a/src/lc-lib/admin/transport_unix.go b/src/lc-lib/admin/transport_unix.go index 4c53460d..febd39dd 100644 --- a/src/lc-lib/admin/transport_unix.go +++ b/src/lc-lib/admin/transport_unix.go @@ -21,6 +21,7 @@ package admin import ( "fmt" "net" + "os" ) func init() { From e92159545e0741efc6e5997a31dbb052256b1300 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Thu, 22 Jan 2015 21:56:35 +0000 Subject: [PATCH 25/75] 
Parameter is addresses not hosts --- lib/logstash/outputs/courier.rb | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/lib/logstash/outputs/courier.rb b/lib/logstash/outputs/courier.rb index a87b1d2f..51984dc5 100644 --- a/lib/logstash/outputs/courier.rb +++ b/lib/logstash/outputs/courier.rb @@ -25,7 +25,7 @@ class Courier < LogStash::Outputs::Base milestone 1 # The list of addresses Log Courier should send to - config :hosts, :validate => :array, :required => true + config :addresses, :validate => :array, :required => true # The port to connect to config :port, :validate => :number, :required => true @@ -55,7 +55,7 @@ def register options = { logger: @logger, - addresses: @hosts, + addresses: @addresses, port: @port, ssl_ca: @ssl_ca, ssl_certificate: @ssl_certificate, From 1527f1a70e629a6e8fc632cdf54f5b5ae9d2c2cf Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Thu, 22 Jan 2015 22:04:50 +0000 Subject: [PATCH 26/75] Implement tcp transport on output plugin --- lib/log-courier/client.rb | 12 ++- .../{client_tls.rb => client_tcp.rb} | 82 +++++++++++-------- lib/log-courier/server.rb | 2 +- 3 files changed, 58 insertions(+), 38 deletions(-) rename lib/log-courier/{client_tls.rb => client_tcp.rb} (71%) diff --git a/lib/log-courier/client.rb b/lib/log-courier/client.rb index 3b8a55e5..4472788c 100644 --- a/lib/log-courier/client.rb +++ b/lib/log-courier/client.rb @@ -93,15 +93,21 @@ class Client def initialize(options = {}) @options = { logger: nil, + transport: 'tls', spool_size: 1024, - idle_timeout: 5 + idle_timeout: 5, }.merge!(options) @logger = @options[:logger] @logger['plugin'] = 'output/courier' unless @logger.nil? 
- require 'log-courier/client_tls' - @client = ClientTls.new(@options) + case @options[:transport] + when 'tcp', 'tls' + require 'log-courier/client_tcp' + @server = ClientTcp.new(@options) + else + fail 'output/courier: \'transport\' must be tcp or tls' + end @event_queue = EventQueue.new @options[:spool_size] @pending_payloads = {} diff --git a/lib/log-courier/client_tls.rb b/lib/log-courier/client_tcp.rb similarity index 71% rename from lib/log-courier/client_tls.rb rename to lib/log-courier/client_tcp.rb index 6ebe68ee..793d92b3 100644 --- a/lib/log-courier/client_tls.rb +++ b/lib/log-courier/client_tcp.rb @@ -23,16 +23,17 @@ module LogCourier # TLS transport implementation - class ClientTls + class ClientTcp def initialize(options = {}) @options = { logger: nil, + transport: 'tls', port: nil, addresses: [], ssl_ca: nil, ssl_certificate: nil, ssl_key: nil, - ssl_key_passphrase: nil + ssl_key_passphrase: nil, }.merge!(options) @logger = @options[:logger] @@ -43,12 +44,13 @@ def initialize(options = {}) fail 'output/courier: \'addresses\' must contain at least one address' if @options[:addresses].empty? - c = 0 - [:ssl_certificate, :ssl_key].each do - c += 1 + if @options[:transport] == 'tls' + c = 0 + [:ssl_certificate, :ssl_key].each do + c += 1 + end + fail 'output/courier: \'ssl_certificate\' and \'ssl_key\' must be specified together' if c == 1 end - - fail 'output/courier: \'ssl_certificate\' and \'ssl_key\' must be specified together' if c == 1 end def connect(io_control) @@ -139,14 +141,18 @@ def run_send(io_control) end end return - rescue OpenSSL::SSL::SSLError, IOError, Errno::ECONNRESET => e + rescue OpenSSL::SSL::SSLError => e @logger.warn 'SSL write error', :error => e.message unless @logger.nil? io_control << ['F'] return + rescue IOError, Errno::ECONNRESET => e + @logger.warn 'Write error', :error => e.message unless @logger.nil? 
+ io_control << ['F'] + return rescue ShutdownSignal return rescue StandardError, NativeException => e - @logger.warn e, :hint => 'Unknown SSL write error' unless @logger.nil? + @logger.warn e, :hint => 'Unknown write error' unless @logger.nil? io_control << ['F'] return end @@ -174,10 +180,14 @@ def run_recv(io_control) io_control << ['R', signature, message] end return - rescue OpenSSL::SSL::SSLError, IOError, Errno::ECONNRESET => e + rescue OpenSSL::SSL::SSLError => e @logger.warn 'SSL read error', :error => e.message unless @logger.nil? io_control << ['F'] return + rescue IOError, Errno::ECONNRESET => e + @logger.warn 'Read error', :error => e.message unless @logger.nil? + io_control << ['F'] + return rescue EOFError @logger.warn 'Connection closed by server' unless @logger.nil? io_control << ['F'] @@ -185,7 +195,7 @@ def run_recv(io_control) rescue ShutdownSignal return rescue => e - @logger.warn e, :hint => 'Unknown SSL read error' unless @logger.nil? + @logger.warn e, :hint => 'Unknown read error' unless @logger.nil? io_control << ['F'] return end @@ -200,33 +210,37 @@ def tls_connect begin tcp_socket = TCPSocket.new(address, port) - ssl = OpenSSL::SSL::SSLContext.new - - # Disable SSLv2 and SSLv3 - # Call set_params first to ensure options attribute is there (hmmmm?) - ssl.set_params - # Modify the default options to ensure SSLv2 and SSLv3 is disabled - # This retains any beneficial options set by default in the current Ruby implementation - ssl.options |= OpenSSL::SSL::OP_NO_SSLv2 if defined?(OpenSSL::SSL::OP_NO_SSLv2) - ssl.options |= OpenSSL::SSL::OP_NO_SSLv3 if defined?(OpenSSL::SSL::OP_NO_SSLv3) - - # Set the certificate file - unless @options[:ssl_certificate].nil? 
- ssl.cert = OpenSSL::X509::Certificate.new(File.read(@options[:ssl_certificate])) - ssl.key = OpenSSL::PKey::RSA.new(File.read(@options[:ssl_key]), @options[:ssl_key_passphrase]) - end + if @options[:transport] == 'tls' + ssl = OpenSSL::SSL::SSLContext.new + + # Disable SSLv2 and SSLv3 + # Call set_params first to ensure options attribute is there (hmmmm?) + ssl.set_params + # Modify the default options to ensure SSLv2 and SSLv3 is disabled + # This retains any beneficial options set by default in the current Ruby implementation + ssl.options |= OpenSSL::SSL::OP_NO_SSLv2 if defined?(OpenSSL::SSL::OP_NO_SSLv2) + ssl.options |= OpenSSL::SSL::OP_NO_SSLv3 if defined?(OpenSSL::SSL::OP_NO_SSLv3) + + # Set the certificate file + unless @options[:ssl_certificate].nil? + ssl.cert = OpenSSL::X509::Certificate.new(File.read(@options[:ssl_certificate])) + ssl.key = OpenSSL::PKey::RSA.new(File.read(@options[:ssl_key]), @options[:ssl_key_passphrase]) + end - cert_store = OpenSSL::X509::Store.new - cert_store.add_file(@options[:ssl_ca]) - ssl.cert_store = cert_store - ssl.verify_mode = OpenSSL::SSL::VERIFY_PEER | OpenSSL::SSL::VERIFY_FAIL_IF_NO_PEER_CERT + cert_store = OpenSSL::X509::Store.new + cert_store.add_file(@options[:ssl_ca]) + ssl.cert_store = cert_store + ssl.verify_mode = OpenSSL::SSL::VERIFY_PEER | OpenSSL::SSL::VERIFY_FAIL_IF_NO_PEER_CERT - @ssl_client = OpenSSL::SSL::SSLSocket.new(tcp_socket) + @ssl_client = OpenSSL::SSL::SSLSocket.new(tcp_socket) - socket = @ssl_client.connect + socket = @ssl_client.connect - # Verify certificate - socket.post_connection_check(address) + # Verify certificate + socket.post_connection_check(address) + else + socket = tcp_socket.connect + end # Add extra logging data now we're connected @logger['address'] = address diff --git a/lib/log-courier/server.rb b/lib/log-courier/server.rb index 7f915fee..6a8ba164 100644 --- a/lib/log-courier/server.rb +++ b/lib/log-courier/server.rb @@ -36,7 +36,7 @@ class Server def initialize(options = {}) 
@options = { logger: nil, - transport: 'tls' + transport: 'tls', }.merge!(options) @logger = @options[:logger] From 417c091929061597ebb2540b4eac5b35d9467edc Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Thu, 22 Jan 2015 22:06:13 +0000 Subject: [PATCH 27/75] Split IO error into SSL error and non-SSL error to prevent confusing error message --- lib/log-courier/server_tcp.rb | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/lib/log-courier/server_tcp.rb b/lib/log-courier/server_tcp.rb index 8cb7f4ac..ee48fea7 100644 --- a/lib/log-courier/server_tcp.rb +++ b/lib/log-courier/server_tcp.rb @@ -264,10 +264,14 @@ def run @logger.info 'Connection closed', :peer => @peer unless @logger.nil? end return - rescue OpenSSL::SSL::SSLError, IOError, Errno::ECONNRESET => e + rescue OpenSSL::SSL::SSLError => e # Read errors, only action is to shutdown which we'll do in ensure @logger.warn 'SSL error, connection aborted', :error => e.message, :peer => @peer unless @logger.nil? return + rescue IOError, Errno::ECONNRESET => e + # Read errors, only action is to shutdown which we'll do in ensure + @logger.warn 'Connection aborted', :error => e.message, :peer => @peer unless @logger.nil? + return rescue ProtocolError => e # Connection abort request due to a protocol error @logger.warn 'Protocol error, connection aborted', :error => e.message, :peer => @peer unless @logger.nil? 
From f969fce2038d041d1985d0081a314bdfd03ef297 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Thu, 22 Jan 2015 22:08:56 +0000 Subject: [PATCH 28/75] Add spool flush debug logging --- lib/log-courier/client.rb | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/lib/log-courier/client.rb b/lib/log-courier/client.rb index 4472788c..fde6346b 100644 --- a/lib/log-courier/client.rb +++ b/lib/log-courier/client.rb @@ -165,6 +165,12 @@ def run_spooler next if spooled.length == 0 end + if spooled.length >= @options[:spool_size] + @logger.debug 'Flushing full spool', :events => spooled.length + else + @logger.debug 'Flushing spool due to timeout', :events => spooled.length + end + # Pass through to io_control but only if we're ready to send @send_mutex.synchronize do @send_cond.wait(@send_mutex) unless @send_ready From 7daae59b49ddfc28b453cfd215ea783b13e0b3b2 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Sun, 25 Jan 2015 15:13:02 +0000 Subject: [PATCH 29/75] Fix typo that crashed output plugin --- lib/log-courier/client.rb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/lib/log-courier/client.rb b/lib/log-courier/client.rb index fde6346b..205368b8 100644 --- a/lib/log-courier/client.rb +++ b/lib/log-courier/client.rb @@ -104,7 +104,7 @@ def initialize(options = {}) case @options[:transport] when 'tcp', 'tls' require 'log-courier/client_tcp' - @server = ClientTcp.new(@options) + @client = ClientTcp.new(@options) else fail 'output/courier: \'transport\' must be tcp or tls' end From 6cd44d4a631c806310da9387bd96d2dc6ab2c42f Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Sun, 25 Jan 2015 15:13:15 +0000 Subject: [PATCH 30/75] Improve documentation on reloading, log rotation and reopening of log files (fixes #99) --- docs/Configuration.md | 23 ++++++++++++++++++----- 1 file changed, 18 insertions(+), 5 deletions(-) diff --git a/docs/Configuration.md b/docs/Configuration.md index c0b31c91..6014e247 100644 --- a/docs/Configuration.md +++ 
b/docs/Configuration.md @@ -82,11 +82,18 @@ command replacing 1234 with the Process ID of Log Courier. kill -HUP 1234 +Log Courier will reopen its own log file if one has been configured, allowing +native log rotation to take place. + Please note that files Log Courier has already started harvesting will continue -to harvest after the reload with their previous configuration. The reload -process will only affect new files and the network configuration. In the case of -a network configuration change, Log Courier will disconnect and reconnect at the -earliest opportunity. +to be harvested after the reload with their original configuration; the reload +process will only affect new files. Additionally, harvested log files will not +be reopened. Log rotations are detected automatically. To control when a +harvested log file is closed you can adjust the [`"dead time"`](#dead-time) +option. + +In the case of a network configuration change, Log Courier will disconnect and +reconnect at the earliest opportunity. *Configuration reload is not currently available on Windows builds of Log Courier.* @@ -505,7 +512,13 @@ also have [Stream Configuration](#streamconfiguration) parameters specified. *Array of Fileglobs. Required* At least one Fileglob must be specified and all matching files for all provided -globs will be tailed. +globs will be monitored. + +If the log file is rotated, Log Courier will detect this and automatically start +harvesting the new file. It will also keep the old file open to catch any +delayed writes that a still-reloading application has not yet written. You can +configure the time period before this old log file is closed using the +[`"dead time"`](#dead-time) option. See above for a description of the Fileglob field type. 
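The rotation behaviour documented above — keep the old handle open for late writes while starting to harvest the new file at the same path — hinges on comparing file identity rather than the path name. An illustrative Ruby sketch of that identity check (assumed helper name; the actual Log Courier implementation is in Go and differs in detail):

```ruby
require 'tmpdir'

# Illustrative only: a log file has been rotated when the file now sitting
# at `path` is no longer the file behind our open `handle`, which we detect
# by comparing device and inode numbers.
def rotated?(handle, path)
  current = File.stat(path)
  opened = handle.stat
  current.ino != opened.ino || current.dev != opened.dev
rescue Errno::ENOENT
  true # the path vanished entirely; treat it as rotated
end

Dir.mktmpdir do |dir|
  path = File.join(dir, 'app.log')
  File.write(path, "before rotation\n")
  handle = File.open(path)
  puts "freshly opened: rotated?=#{rotated?(handle, path)}"

  File.rename(path, "#{path}.1")        # logrotate-style move...
  File.write(path, "after rotation\n")  # ...and a new file appears
  puts "after rename:   rotated?=#{rotated?(handle, path)}"
  handle.close
end
```

Because the old handle still points at the renamed file, reads through it continue to work until the harvester's dead time elapses — which is exactly why delayed writes are not lost.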
From 44274f884795f6398929fa331d95837c0e03ec34 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Sun, 1 Feb 2015 16:40:15 +0000 Subject: [PATCH 31/75] When reporting a configuration error, report the line number and display the location on the line the error occurs Fixes #102 --- src/lc-lib/core/config.go | 22 +++++++++++++++++++++- 1 file changed, 21 insertions(+), 1 deletion(-) diff --git a/src/lc-lib/core/config.go b/src/lc-lib/core/config.go index 70b4c0cd..a4491c7b 100644 --- a/src/lc-lib/core/config.go +++ b/src/lc-lib/core/config.go @@ -217,6 +217,26 @@ func (c *Config) loadFile(path string) (stripped *bytes.Buffer, err error) { return } +// Parse a *json.SyntaxError into a pretty error message +func (c *Config) parseSyntaxError(js []byte, err error) error { + json_err, ok := err.(*json.SyntaxError) + if !ok { + return err + } + + start := bytes.LastIndex(js[:json_err.Offset], []byte("\n"))+1 + end := bytes.Index(js[start:], []byte("\n")) + if end >= 0 { + end += start + } else { + end = len(js) + } + + line, pos := bytes.Count(js[:start], []byte("\n")), int(json_err.Offset) - start - 1 + + return fmt.Errorf("%s on line %d\n%s\n%s^", err, line, js[start:end], strings.Repeat(" ", pos)) +} + // TODO: Config from a TOML? 
Maybe a custom one func (c *Config) Load(path string) (err error) { var data *bytes.Buffer @@ -229,7 +249,7 @@ func (c *Config) Load(path string) (err error) { // Pull the entire structure into raw_config raw_config := make(map[string]interface{}) if err = json.Unmarshal(data.Bytes(), &raw_config); err != nil { - return + return c.parseSyntaxError(data.Bytes(), err) } // Fill in defaults where the zero-value is a valid setting From 51ded68e8d065f57c1a9b5f1142db7d3dd118a23 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Mon, 2 Feb 2015 19:09:23 +0000 Subject: [PATCH 32/75] Remove use_bigdecimal JrJackson option as Logstash does not support BigDecimal A severe side effect is also that the use_bigdecimal option leaks and persists in the JrJackson library, meaning all json filter and codec operations will begin to create BigDecimal objects even without the option. Fixes #103 --- lib/log-courier/server.rb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/lib/log-courier/server.rb b/lib/log-courier/server.rb index 7f915fee..d4096a70 100644 --- a/lib/log-courier/server.rb +++ b/lib/log-courier/server.rb @@ -59,7 +59,7 @@ def initialize(options = {}) # Load the json adapter @json_adapter = MultiJson.adapter.instance - @json_options = { raw: true, use_bigdecimal: true } + @json_options = { raw: true } end def run(&block) From 162c632564ea769e16fdeacdea259ad755efa045 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Mon, 2 Feb 2015 20:22:05 +0000 Subject: [PATCH 33/75] Add PR #105 to change log --- docs/ChangeLog.md | 1 + 1 file changed, 1 insertion(+) diff --git a/docs/ChangeLog.md b/docs/ChangeLog.md index edf03775..23deb955 100644 --- a/docs/ChangeLog.md +++ b/docs/ChangeLog.md @@ -50,6 +50,7 @@ due to peer_recv_queue being exceeded (#92) error (#100) * Fix "address already in use" startup error when admin is enabled on a unix socket and the unix socket file already exists during startup (#101) +* Report the location in the configuration file of any syntax 
errors (#102) ***Security*** From 98f284e75cf53f6142a7e822e661139c7c042bbd Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Sat, 17 Jan 2015 17:15:20 +0000 Subject: [PATCH 34/75] Implement SRV record lookup for #85 --- docs/Configuration.md | 29 +++++++-- src/lc-lib/core/config.go | 8 +++ src/lc-lib/transports/tcp.go | 113 +++++++++++++++++++++++++++-------- 3 files changed, 119 insertions(+), 31 deletions(-) diff --git a/docs/Configuration.md b/docs/Configuration.md index 6014e247..a90990fa 100644 --- a/docs/Configuration.md +++ b/docs/Configuration.md @@ -418,17 +418,34 @@ this slows down the rate of reconnection attempts. When using the ZMQ transport, this is how long to wait before restarting the ZMQ stack when it was reset. +### `"rfc 2782 srv"` + +*Boolean. Optional. Default: true* + +When performing SRV DNS lookups for entries in the [`"servers"`](#servers) list, +use RFC 2782 style lookups of the form `_service._proto.example.com`. + +### `"rfc 2782 service"` + +*String. Optional. Default: "courier"* + +Specifies the service to request when using RFC 2782 style SRV lookups. Using +the default, "courier", would result in a lookup for +`_courier._tcp.example.com`. + ### `"servers"` *Array of Strings. Required* -Sets the list of servers to send logs to. DNS names are resolved into IP -addresses each time connections are made and all available IP addresses are -used. +Sets the list of servers to send logs to. Accepted formats for each server entry are: + +* `ipaddress:port` +* `hostname:port` (A DNS lookup is performed) +* `@hostname` (A SRV DNS lookup is performed, with further DNS lookups if required) -Only the initial server is randomly selected. Subsequent connection attempts are -made to the next IP address available (if the server had multiple IP addresses) -or to the next server listed in the configuration file (if all addresses for the +The initial server is randomly selected. 
Subsequent connection attempts are made +to the next IP address available (if the server had multiple IP addresses) or to +the next server listed in the configuration file (if all addresses for the previous server were exhausted.) ### `"ssl ca"` diff --git a/src/lc-lib/core/config.go b/src/lc-lib/core/config.go index a4491c7b..65eb05ca 100644 --- a/src/lc-lib/core/config.go +++ b/src/lc-lib/core/config.go @@ -46,6 +46,8 @@ const ( default_GeneralConfig_LogStdout bool = true default_GeneralConfig_LogSyslog bool = false default_NetworkConfig_Transport string = "tls" + default_NetworkConfig_Rfc2782Srv bool = true + default_NetworkConfig_Rfc2782Service string = "courier" default_NetworkConfig_Timeout time.Duration = 15 * time.Second default_NetworkConfig_Reconnect time.Duration = 1 * time.Second default_NetworkConfig_MaxPendingPayloads int64 = 10 @@ -85,6 +87,8 @@ type GeneralConfig struct { type NetworkConfig struct { Transport string `config:"transport"` Servers []string `config:"servers"` + Rfc2782Srv bool `config:"rfc 2782 srv"` + Rfc2782Service string `config:"rfc 2782 service"` Timeout time.Duration `config:"timeout"` Reconnect time.Duration `config:"reconnect"` MaxPendingPayloads int64 `config:"max pending payloads"` @@ -257,6 +261,7 @@ func (c *Config) Load(path string) (err error) { c.General.LogLevel = default_GeneralConfig_LogLevel c.General.LogStdout = default_GeneralConfig_LogStdout c.General.LogSyslog = default_GeneralConfig_LogSyslog + c.Network.Rfc2782Srv = default_NetworkConfig_Rfc2782Srv // Populate configuration - reporting errors on spelling mistakes etc.

if err = c.PopulateConfig(c, "/", raw_config); err != nil { @@ -359,6 +364,9 @@ func (c *Config) Load(path string) (err error) { return } + if c.Network.Rfc2782Service == "" { + c.Network.Rfc2782Service = default_NetworkConfig_Rfc2782Service + } if c.Network.Timeout == time.Duration(0) { c.Network.Timeout = default_NetworkConfig_Timeout } diff --git a/src/lc-lib/transports/tcp.go b/src/lc-lib/transports/tcp.go index bc5b77b9..be9a48d9 100644 --- a/src/lc-lib/transports/tcp.go +++ b/src/lc-lib/transports/tcp.go @@ -31,6 +31,7 @@ import ( "math/rand" "net" "regexp" + "strconv" "sync" "time" ) @@ -75,8 +76,7 @@ type TransportTcp struct { roundrobin int host_is_ip bool host string - port string - addresses []net.IP + addresses []*net.TCPAddr } func NewTcpTransportFactory(config *core.Config, config_path string, unused map[string]interface{}, name string) (core.TransportFactory, error) { @@ -176,33 +176,13 @@ func (t *TransportTcp) Init() error { // Have we exhausted the address list we had? if t.addresses == nil { - var err error - - // Round robin to the next server - selected := t.net_config.Servers[t.roundrobin%len(t.net_config.Servers)] - t.roundrobin++ - - t.host, t.port, err = net.SplitHostPort(selected) - if err != nil { - return fmt.Errorf("Invalid hostport given: %s", selected) - } - - // Are we an IP? 
- if ip := net.ParseIP(t.host); ip != nil { - t.host_is_ip = true - t.addresses = []net.IP{ip} - } else { - // Lookup the server in DNS - t.host_is_ip = false - t.addresses, err = net.LookupIP(t.host) - if err != nil { - return fmt.Errorf("DNS lookup failure \"%s\": %s", t.host, err) - } + if err := t.populateAddresses(); err != nil { + return err } } // Try next address and drop it from our list - addressport := net.JoinHostPort(t.addresses[0].String(), t.port) + addressport := t.addresses[0].String() if len(t.addresses) > 1 { t.addresses = t.addresses[1:] } else { @@ -269,6 +249,89 @@ func (t *TransportTcp) Init() error { return nil } +func (t *TransportTcp) populateAddresses() (err error) { + // Round robin to the next server + selected := t.net_config.Servers[t.roundrobin%len(t.net_config.Servers)] + t.roundrobin++ + + t.addresses = make([]*net.TCPAddr, 0) + + // @hostname means SRV record where the host and port are in the record + if len(t.host) > 0 && t.host[0] == '@' { + var srvs []*net.SRV + var service, protocol string + + t.host_is_ip = false + + if t.net_config.Rfc2782Srv { + service, protocol = t.net_config.Rfc2782Service, "tcp" + } else { + service, protocol = "", "" + } + + _, srvs, err = net.LookupSRV(service, protocol, t.host[1:]) + if err != nil { + return fmt.Errorf("DNS SRV lookup failure \"%s\": %s", t.host, err) + } else if len(srvs) == 0 { + return fmt.Errorf("DNS SRV lookup failure \"%s\": No targets found", t.host) + } + + for _, srv := range srvs { + if _, err = t.populateLookup(srv.Target, int(srv.Port)); err != nil { + return + } + } + + return + } + + // Standard host:port declaration + var port_str string + var port uint64 + if t.host, port_str, err = net.SplitHostPort(selected); err != nil { + return fmt.Errorf("Invalid hostport given: %s", selected) + } + + if port, err = strconv.ParseUint(port_str, 10, 16); err != nil { + return fmt.Errorf("Invalid port given: %s", port_str) + } + + if t.host_is_ip, err = t.populateLookup(t.host, 
int(port)); err != nil { + return + } + + return nil +} + +func (t *TransportTcp) populateLookup(host string, port int) (bool, error) { + if ip := net.ParseIP(host); ip != nil { + // IP address + t.addresses = append(t.addresses, &net.TCPAddr{ + IP: ip, + Port: port, + }) + + return true, nil + } + + // Lookup the hostname in DNS + ips, err := net.LookupIP(host) + if err != nil { + return false, fmt.Errorf("DNS lookup failure \"%s\": %s", host, err) + } else if len(ips) == 0 { + return false, fmt.Errorf("DNS lookup failure \"%s\": No addresses found", host) + } + + for _, ip := range ips { + t.addresses = append(t.addresses, &net.TCPAddr{ + IP: ip, + Port: port, + }) + } + + return false, nil +} + func (t *TransportTcp) disconnect() { if t.shutdown == nil { return From 51c86e44a8e29f46baf44ea0e31c139f90fa32d5 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Tue, 3 Feb 2015 09:32:06 +0000 Subject: [PATCH 35/75] Fix an extremely rare race condition where a dead file may not resume if it updates at exactly the same moment it was flagged as dead --- src/lc-lib/harvester/harvester.go | 12 ++++++++---- src/lc-lib/prospector/info.go | 8 ++++---- 2 files changed, 12 insertions(+), 8 deletions(-) diff --git a/src/lc-lib/harvester/harvester.go b/src/lc-lib/harvester/harvester.go index 62894d8e..76bb0ef4 100644 --- a/src/lc-lib/harvester/harvester.go +++ b/src/lc-lib/harvester/harvester.go @@ -31,6 +31,7 @@ import ( type HarvesterFinish struct { Last_Offset int64 Error error + Last_Stat os.FileInfo } type Harvester struct { @@ -90,6 +91,7 @@ func (h *Harvester) Start(output chan<- *core.EventDescriptor) { go func() { status := &HarvesterFinish{} status.Last_Offset, status.Error = h.harvest(output) + status.Last_Stat = h.fileinfo h.return_chan <- status }() } @@ -231,6 +233,9 @@ ReadLoop: return h.codec.Teardown(), err } + // Store latest stat() + h.fileinfo = info + if info.Size() < h.offset { log.Warning("Unexpected file truncation, seeking to beginning: %s", h.path) 
h.file.Seek(0, os.SEEK_SET) @@ -243,10 +248,6 @@ ReadLoop: if age := time.Since(last_read_time); age > h.stream_config.DeadTime { // if last_read_time was more than dead time, this file is probably dead. Stop watching it. log.Info("Stopping harvest of %s; last change was %v ago", h.path, age-(age%time.Second)) - // TODO: We should return a Stat() from before we attempted to read - // In prospector we use that for comparison to resume - // This prevents a potential race condition if we stop just as the - // file is modified with extra lines... return h.codec.Teardown(), nil } } @@ -328,6 +329,9 @@ func (h *Harvester) prepareHarvester() error { return fmt.Errorf("Not the same file") } + // Store latest stat() + h.fileinfo = info + // TODO: Check error? h.file.Seek(h.offset, os.SEEK_SET) diff --git a/src/lc-lib/prospector/info.go b/src/lc-lib/prospector/info.go index 6bcdbaab..9047ca26 100644 --- a/src/lc-lib/prospector/info.go +++ b/src/lc-lib/prospector/info.go @@ -90,10 +90,6 @@ func (pi *prospectorInfo) isRunning() bool { return pi.running } -/*func (pi *prospectorInfo) ShutdownSignal() <-chan interface{} { - return pi.harvester_stop -}*/ - func (pi *prospectorInfo) stop() { if !pi.running { return @@ -120,6 +116,10 @@ func (pi *prospectorInfo) setHarvesterStopped(status *harvester.HarvesterFinish) pi.status = Status_Failed pi.err = status.Error } + if status.Last_Stat != nil { + // Keep the last stat the harvester ran so we compare timestamps for potential resume + pi.identity.Update(status.Last_Stat, &pi.identity) + } pi.harvester = nil } From 14f7a94027591c6b864fc971c60dd028a3175817 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Tue, 3 Feb 2015 21:44:30 +0000 Subject: [PATCH 36/75] Change log update --- docs/ChangeLog.md | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/docs/ChangeLog.md b/docs/ChangeLog.md index 23deb955..144fbd21 100644 --- a/docs/ChangeLog.md +++ b/docs/ChangeLog.md @@ -51,6 +51,11 @@ error (#100) * Fix "address already in use" 
startup error when admin is enabled on a unix socket and the unix socket file already exists during startup (#101) * Report the location in the configuration file of any syntax errors (#102) +* Fix an extremely rare race condition where a dead file may not be resumed if +it is updated at the exact moment it is marked as dead +* Remove use_bigdecimal JrJackson JSON decode option as Logstash does not +support it. Also, using this option enables it globally within Logstash due to +option leakage within the JrJackson gem (#103) ***Security*** From fe376d8ef7ff545ac9724edbef43943c594723e7 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Wed, 4 Feb 2015 09:03:50 +0000 Subject: [PATCH 37/75] Fix filter codec not saving offset correctly when dead time reached or stdin EOF reached --- docs/ChangeLog.md | 2 ++ src/lc-lib/codecs/filter.go | 3 +++ 2 files changed, 5 insertions(+) diff --git a/docs/ChangeLog.md b/docs/ChangeLog.md index 144fbd21..476e59d5 100644 --- a/docs/ChangeLog.md +++ b/docs/ChangeLog.md @@ -56,6 +56,8 @@ it is updated at the exact moment it is marked as dead * Remove use_bigdecimal JrJackson JSON decode option as Logstash does not support it. 
Also, using this option enables it globally within Logstash due to option leakage within the JrJackson gem (#103) +* Fix filter codec not saving offset correctly when dead time reached or stdin +EOF reached (reported in #108) ***Security*** diff --git a/src/lc-lib/codecs/filter.go b/src/lc-lib/codecs/filter.go index c7b6f624..171e7980 100644 --- a/src/lc-lib/codecs/filter.go +++ b/src/lc-lib/codecs/filter.go @@ -76,6 +76,7 @@ func (c *CodecFilter) Teardown() int64 { func (c *CodecFilter) Event(start_offset int64, end_offset int64, text string) { // Only flush the event if it matches a filter var match bool + for _, matcher := range c.config.matchers { if matcher.MatchString(text) { match = true @@ -88,6 +89,8 @@ func (c *CodecFilter) Event(start_offset int64, end_offset int64, text string) { } else { c.filtered_lines++ } + + c.last_offset = end_offset } func (c *CodecFilter) Meter() { From 980f30d8d9ed5690126f5645a783e6e2b055316d Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Wed, 4 Feb 2015 09:07:42 +0000 Subject: [PATCH 38/75] Update local only installation to cover the multi_json gem dependency (Fixes #111) --- docs/LogstashIntegration.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/docs/LogstashIntegration.md b/docs/LogstashIntegration.md index a03a5813..d262bd08 100644 --- a/docs/LogstashIntegration.md +++ b/docs/LogstashIntegration.md @@ -50,7 +50,7 @@ which will require an internet connection. cd /path/to/logstash export GEM_HOME=vendor/bundle/jruby/1.9 - java -jar vendor/jar/jruby-complete-1.7.11.jar -S gem install /path/to/log-courier-X.X.gem + java -jar vendor/jar/jruby-complete-1.7.11.jar -S gem install /path/to/the.gem The remaining step is to manually install the Logstash plugins. @@ -60,13 +60,13 @@ The remaining step is to manually install the Logstash plugins. 
### Local-only Installation If you need to install the gem and plugins on a server without an internet -connection, you can download the latest ffi-rzmq-core and ffi-zmq gems from the -rubygems site, transfer them across, and install them yourself. Once they are -installed, follow the instructions for Manual Installation and the process can -be completed without an internet connection. +connection, you can download the gem dependencies from the rubygems site, +transfer them across. Follow the instructions for Manual Installation and +install the dependency gems before the Log Courier gem. * https://rubygems.org/gems/ffi-rzmq-core * https://rubygems.org/gems/ffi-rzmq +* https://rubygems.org/gems/multi_json ## Configuration From 0a83cc5de4b924bddbdb17dd09d1945517293d7b Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Wed, 4 Feb 2015 09:08:17 +0000 Subject: [PATCH 39/75] Fix typo --- docs/LogstashIntegration.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/LogstashIntegration.md b/docs/LogstashIntegration.md index d262bd08..1e25e05a 100644 --- a/docs/LogstashIntegration.md +++ b/docs/LogstashIntegration.md @@ -60,7 +60,7 @@ The remaining step is to manually install the Logstash plugins. ### Local-only Installation If you need to install the gem and plugins on a server without an internet -connection, you can download the gem dependencies from the rubygems site, +connection, you can download the gem dependencies from the rubygems site and transfer them across. Follow the instructions for Manual Installation and install the dependency gems before the Log Courier gem. 
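For the offline installation described above, it can help to verify which of the dependency gems are already available before copying anything across. A small illustrative Ruby check (the helper name is ours; the gem names are the ones listed in the docs):

```ruby
# Report which of the required dependency gems can be activated locally.
# Kernel#gem raises Gem::LoadError when a gem is not installed, which is
# enough to distinguish the two cases without attempting an install.
def gem_status(names)
  names.map do |name|
    begin
      gem name
      [name, :available]
    rescue Gem::LoadError
      [name, :missing]
    end
  end.to_h
end

gem_status(%w[ffi-rzmq-core ffi-rzmq multi_json]).each do |name, state|
  puts "#{name}: #{state}"
end
```

Any gem reported as `:missing` would need to be downloaded from rubygems.org and transferred before installing the Log Courier gem itself.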
From 694c05964014c58038bb5d3beafadeac359ec216 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Tue, 3 Feb 2015 21:41:37 +0000 Subject: [PATCH 40/75] When reading from stdin, flush spooler at end of stdin, and wait for the publisher to finish sending events before shutting down Fixes #108 --- src/lc-lib/harvester/harvester.go | 9 +- src/lc-lib/prospector/prospector.go | 6 +- src/lc-lib/publisher/publisher.go | 14 +-- src/lc-lib/registrar/eventspool.go | 27 +++-- src/lc-lib/registrar/registrar.go | 18 ++-- src/lc-lib/spooler/spooler.go | 17 +++ src/log-courier/log-courier.go | 15 ++- src/log-courier/stdin_registrar.go | 158 ++++++++++++++++++++++++++++ 8 files changed, 229 insertions(+), 35 deletions(-) create mode 100644 src/log-courier/stdin_registrar.go diff --git a/src/lc-lib/harvester/harvester.go b/src/lc-lib/harvester/harvester.go index 76bb0ef4..f27d42b3 100644 --- a/src/lc-lib/harvester/harvester.go +++ b/src/lc-lib/harvester/harvester.go @@ -72,7 +72,6 @@ func NewHarvester(stream core.Stream, config *core.Config, stream_config *core.S ret := &Harvester{ stop_chan: make(chan interface{}), - return_chan: make(chan *HarvesterFinish, 1), stream: stream, fileinfo: fileinfo, path: path, @@ -88,11 +87,19 @@ func NewHarvester(stream core.Stream, config *core.Config, stream_config *core.S } func (h *Harvester) Start(output chan<- *core.EventDescriptor) { + if h.return_chan != nil { + h.Stop() + <-h.return_chan + } + + h.return_chan = make(chan *HarvesterFinish, 1) + go func() { status := &HarvesterFinish{} status.Last_Offset, status.Error = h.harvest(output) status.Last_Stat = h.fileinfo h.return_chan <- status + close(h.return_chan) }() } diff --git a/src/lc-lib/prospector/prospector.go b/src/lc-lib/prospector/prospector.go index dd0d4e51..ce110daa 100644 --- a/src/lc-lib/prospector/prospector.go +++ b/src/lc-lib/prospector/prospector.go @@ -41,15 +41,15 @@ type Prospector struct { from_beginning bool iteration uint32 lastscan time.Time - registrar 
*registrar.Registrar - registrar_spool *registrar.RegistrarEventSpool + registrar registrar.Registrator + registrar_spool registrar.EventSpooler snapshot_chan chan interface{} snapshot_sink chan []*core.Snapshot output chan<- *core.EventDescriptor } -func NewProspector(pipeline *core.Pipeline, config *core.Config, from_beginning bool, registrar_imp *registrar.Registrar, spooler_imp *spooler.Spooler) (*Prospector, error) { +func NewProspector(pipeline *core.Pipeline, config *core.Config, from_beginning bool, registrar_imp registrar.Registrator, spooler_imp *spooler.Spooler) (*Prospector, error) { ret := &Prospector{ config: config, prospectorindex: make(map[string]*prospectorInfo), diff --git a/src/lc-lib/publisher/publisher.go b/src/lc-lib/publisher/publisher.go index 1d99a84c..469f97d0 100644 --- a/src/lc-lib/publisher/publisher.go +++ b/src/lc-lib/publisher/publisher.go @@ -47,23 +47,17 @@ const ( Status_Reconnecting ) -type EventSpool interface { - Close() - Add(registrar.RegistrarEvent) - Send() -} - type NullEventSpool struct { } -func newNullEventSpool() EventSpool { +func newNullEventSpool() *NullEventSpool { return &NullEventSpool{} } func (s *NullEventSpool) Close() { } -func (s *NullEventSpool) Add(event registrar.RegistrarEvent) { +func (s *NullEventSpool) Add(event registrar.EventProcessor) { } func (s *NullEventSpool) Send() { @@ -87,7 +81,7 @@ type Publisher struct { num_payloads int64 out_of_sync int input chan []*core.EventDescriptor - registrar_spool EventSpool + registrar_spool registrar.EventSpooler shutdown bool line_count int64 retry_count int64 @@ -100,7 +94,7 @@ type Publisher struct { last_measurement time.Time } -func NewPublisher(pipeline *core.Pipeline, config *core.NetworkConfig, registrar *registrar.Registrar) (*Publisher, error) { +func NewPublisher(pipeline *core.Pipeline, config *core.NetworkConfig, registrar registrar.Registrator) (*Publisher, error) { ret := &Publisher{ config: config, input: make(chan []*core.EventDescriptor, 1), 
diff --git a/src/lc-lib/registrar/eventspool.go b/src/lc-lib/registrar/eventspool.go index 5cfdcabc..e432446f 100644 --- a/src/lc-lib/registrar/eventspool.go +++ b/src/lc-lib/registrar/eventspool.go @@ -20,38 +20,45 @@ import ( "github.com/driskell/log-courier/src/lc-lib/core" ) -type RegistrarEvent interface { +type EventProcessor interface { Process(state map[core.Stream]*FileState) } -type RegistrarEventSpool struct { +type EventSpooler interface { + Close() + Add(EventProcessor) + Send() +} + +type EventSpool struct { registrar *Registrar - events []RegistrarEvent + events []EventProcessor } -func newRegistrarEventSpool(r *Registrar) *RegistrarEventSpool { - ret := &RegistrarEventSpool{ +func newEventSpool(r *Registrar) *EventSpool { + ret := &EventSpool{ registrar: r, } ret.reset() return ret } -func (r *RegistrarEventSpool) Close() { +func (r *EventSpool) Close() { r.registrar.dereferenceSpooler() + r.registrar = nil } -func (r *RegistrarEventSpool) Add(event RegistrarEvent) { +func (r *EventSpool) Add(event EventProcessor) { r.events = append(r.events, event) } -func (r *RegistrarEventSpool) Send() { +func (r *EventSpool) Send() { if len(r.events) != 0 { r.registrar.registrar_chan <- r.events r.reset() } } -func (r *RegistrarEventSpool) reset() { - r.events = make([]RegistrarEvent, 0, 0) +func (r *EventSpool) reset() { + r.events = make([]EventProcessor, 0, 0) } diff --git a/src/lc-lib/registrar/registrar.go b/src/lc-lib/registrar/registrar.go index 3e51481c..f14570e6 100644 --- a/src/lc-lib/registrar/registrar.go +++ b/src/lc-lib/registrar/registrar.go @@ -29,12 +29,17 @@ import ( type LoadPreviousFunc func(string, *FileState) (core.Stream, error) +type Registrator interface { + Connect() EventSpooler + LoadPrevious(LoadPreviousFunc) (bool, error) +} + type Registrar struct { core.PipelineSegment sync.Mutex - registrar_chan chan []RegistrarEvent + registrar_chan chan []EventProcessor references int persistdir string statefile string @@ -43,7 +48,7 @@ type 
Registrar struct { func NewRegistrar(pipeline *core.Pipeline, persistdir string) *Registrar { ret := &Registrar{ - registrar_chan: make(chan []RegistrarEvent, 16), // TODO: Make configurable? + registrar_chan: make(chan []EventProcessor, 16), // TODO: Make configurable? persistdir: persistdir, statefile: ".log-courier", state: make(map[core.Stream]*FileState), @@ -109,22 +114,21 @@ func (r *Registrar) LoadPrevious(callback_func LoadPreviousFunc) (have_previous return } -func (r *Registrar) Connect() *RegistrarEventSpool { +func (r *Registrar) Connect() EventSpooler { r.Lock() - ret := newRegistrarEventSpool(r) + defer r.Unlock() r.references++ - r.Unlock() - return ret + return newEventSpool(r) } func (r *Registrar) dereferenceSpooler() { r.Lock() + defer r.Unlock() r.references-- if r.references == 0 { // Shutdown registrar, all references are closed close(r.registrar_chan) } - r.Unlock() } func (r *Registrar) toCanonical() (canonical map[string]*FileState) { diff --git a/src/lc-lib/spooler/spooler.go b/src/lc-lib/spooler/spooler.go index 02fe671b..5adc8b86 100644 --- a/src/lc-lib/spooler/spooler.go +++ b/src/lc-lib/spooler/spooler.go @@ -60,6 +60,10 @@ func (s *Spooler) Connect() chan<- *core.EventDescriptor { return s.input } +func (s *Spooler) Flush() { + s.input <- nil +} + func (s *Spooler) Run() { defer func() { s.Done() @@ -72,6 +76,19 @@ SpoolerLoop: for { select { case event := <-s.input: + // Nil event means flush + if event == nil { + if len(s.spool) > 0 { + log.Debug("Spooler flushing %d events due to flush event", len(s.spool)) + + if !s.sendSpool() { + break SpoolerLoop + } + } + + continue + } + if len(s.spool) > 0 && int64(s.spool_size)+int64(len(event.Event))+event_header_size >= s.config.SpoolMaxBytes { log.Debug("Spooler flushing %d events due to spool max bytes (%d/%d - next is %d)", len(s.spool), s.spool_size, s.config.SpoolMaxBytes, len(event.Event)+4) diff --git a/src/log-courier/log-courier.go b/src/log-courier/log-courier.go index 
b9aa627c..184644a7 100644 --- a/src/log-courier/log-courier.go +++ b/src/log-courier/log-courier.go @@ -69,7 +69,7 @@ func (lc *LogCourier) Run() { var admin_listener *admin.Listener var on_command <-chan string var harvester_wait <-chan *harvester.HarvesterFinish - var registrar_imp *registrar.Registrar + var registrar_imp registrar.Registrator lc.startUp() @@ -77,7 +77,7 @@ func (lc *LogCourier) Run() { // If reading from stdin, skip admin, and set up a null registrar if lc.stdin { - registrar_imp = nil + registrar_imp = newStdinRegistrar(lc.pipeline) } else { if lc.config.General.AdminEnabled { var err error @@ -93,12 +93,12 @@ func (lc *LogCourier) Run() { registrar_imp = registrar.NewRegistrar(lc.pipeline, lc.config.General.PersistDir) } - publisher, err := publisher.NewPublisher(lc.pipeline, &lc.config.Network, registrar_imp) + publisher_imp, err := publisher.NewPublisher(lc.pipeline, &lc.config.Network, registrar_imp) if err != nil { log.Fatalf("Failed to initialise: %s", err) } - spooler_imp := spooler.NewSpooler(lc.pipeline, &lc.config.General, publisher) + spooler_imp := spooler.NewSpooler(lc.pipeline, &lc.config.General, publisher_imp) // If reading from stdin, don't start prospector, directly start a harvester if lc.stdin { @@ -137,6 +137,13 @@ SignalLoop: log.Notice("Finished reading from stdin at offset %d", finished.Last_Offset) } lc.harvester = nil + + // Flush the spooler + spooler_imp.Flush() + + // Wait for StdinRegistrar to finish + registrar_imp.(*StdinRegistrar).Wait(finished.Last_Offset) + lc.cleanShutdown() break SignalLoop } diff --git a/src/log-courier/stdin_registrar.go b/src/log-courier/stdin_registrar.go new file mode 100644 index 00000000..cb517faf --- /dev/null +++ b/src/log-courier/stdin_registrar.go @@ -0,0 +1,158 @@ +/* + * Copyright 2014 Jason Woods. + * + * This file is a modification of code from Logstash Forwarder. + * Copyright 2012-2013 Jordan Sissel and contributors. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package main + +import ( + "github.com/driskell/log-courier/src/lc-lib/core" + "github.com/driskell/log-courier/src/lc-lib/registrar" + "sync" +) + +type StdinRegistrar struct { + core.PipelineSegment + + sync.Mutex + + group sync.WaitGroup + registrar_chan chan []registrar.EventProcessor + signal_chan chan int64 + references int +} + +func newStdinRegistrar(pipeline *core.Pipeline) *StdinRegistrar { + ret := &StdinRegistrar{ + registrar_chan: make(chan []registrar.EventProcessor, 16), + signal_chan: make(chan int64, 1), + } + + ret.group.Add(1) + + pipeline.Register(ret) + + return ret +} + +func (r *StdinRegistrar) Run() { + defer func() { + r.Done() + r.group.Done() + }() + + var wait_offset *int64 + var last_offset int64 + + state := make(map[core.Stream]*registrar.FileState) + state[nil] = ®istrar.FileState{} + +RegistrarLoop: + for { + select { + case signal := <-r.signal_chan: + if last_offset == signal { + break RegistrarLoop + } + + wait_offset = new(int64) + *wait_offset = signal + + log.Debug("Registrar received stdin EOF offset of %d", *wait_offset) + case events := <-r.registrar_chan: + for _, event := range events { + event.Process(state) + } + + log.Debug("-- %v", state) + + if wait_offset != nil && state[nil].Offset >= *wait_offset { + log.Debug("Registrar has reached end of stdin", state[nil].Offset) + break RegistrarLoop + } + + last_offset = state[nil].Offset + case <-r.OnShutdown(): 
+ break RegistrarLoop + } + } + + log.Info("Registrar exiting") +} + +func (r *StdinRegistrar) Connect() registrar.EventSpooler { + r.Lock() + defer r.Unlock() + r.references++ + return newStdinEventSpool(r) +} + +func (r *StdinRegistrar) Wait(offset int64) { + r.signal_chan <- offset + r.group.Wait() +} + +func (r *StdinRegistrar) LoadPrevious(registrar.LoadPreviousFunc) (bool, error) { + return false, nil +} + +func (r *StdinRegistrar) dereferenceSpooler() { + r.Lock() + defer r.Unlock() + r.references-- + if r.references == 0 { + close(r.registrar_chan) + } +} + +type StdinEventSpool struct { + registrar *StdinRegistrar + events []registrar.EventProcessor +} + +func newStdinEventSpool(r *StdinRegistrar) *StdinEventSpool { + ret := &StdinEventSpool{ + registrar: r, + } + ret.reset() + return ret +} + +func (r *StdinEventSpool) Close() { + r.registrar.dereferenceSpooler() + r.registrar = nil +} + +func (r *StdinEventSpool) Add(event registrar.EventProcessor) { + // StdinEventSpool is only interested in AckEvents + if _, ok := event.(*registrar.AckEvent); !ok { + return + } + + r.events = append(r.events, event) +} + +func (r *StdinEventSpool) Send() { + if len(r.events) != 0 { + r.registrar.registrar_chan <- r.events + r.reset() + } +} + +func (r *StdinEventSpool) reset() { + r.events = make([]registrar.EventProcessor, 0, 0) +} From 34ebab643d82d3fab5019a6cdbea0fdf7cebd403 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Wed, 4 Feb 2015 09:01:53 +0000 Subject: [PATCH 41/75] Fixup debug messages --- src/log-courier/stdin_registrar.go | 8 +++----- 1 file changed, 3 insertions(+), 5 deletions(-) diff --git a/src/log-courier/stdin_registrar.go b/src/log-courier/stdin_registrar.go index cb517faf..50926c43 100644 --- a/src/log-courier/stdin_registrar.go +++ b/src/log-courier/stdin_registrar.go @@ -72,16 +72,14 @@ RegistrarLoop: wait_offset = new(int64) *wait_offset = signal - log.Debug("Registrar received stdin EOF offset of %d", *wait_offset) + log.Debug("Stdin 
registrar received stdin EOF offset of %d", *wait_offset) case events := <-r.registrar_chan: for _, event := range events { event.Process(state) } - log.Debug("-- %v", state) - if wait_offset != nil && state[nil].Offset >= *wait_offset { - log.Debug("Registrar has reached end of stdin", state[nil].Offset) + log.Debug("Stdin registrar has reached end of stdin") break RegistrarLoop } @@ -91,7 +89,7 @@ RegistrarLoop: } } - log.Info("Registrar exiting") + log.Info("Stdin registrar exiting") } func (r *StdinRegistrar) Connect() registrar.EventSpooler { From 625f0072b86659002251012e8182e80c97923e29 Mon Sep 17 00:00:00 2001 From: mheese Date: Sat, 7 Feb 2015 15:15:43 -0800 Subject: [PATCH 42/75] Update server_zmq.rb I feel almost stupid to send in a pull request for this: There is a typo in line 88: "rZMQ::Util.error_string" instead of "ZMQ::Util.error_string". --- lib/log-courier/server_zmq.rb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/lib/log-courier/server_zmq.rb b/lib/log-courier/server_zmq.rb index e9ad345c..363a4a71 100644 --- a/lib/log-courier/server_zmq.rb +++ b/lib/log-courier/server_zmq.rb @@ -85,7 +85,7 @@ def initialize(options = {}) bind = 'tcp://' + @options[:address] + (@options[:port] == 0 ? 
':*' : ':' + @options[:port].to_s) rc = @socket.bind(bind) - fail 'failed to bind at ' + bind + ': ' + rZMQ::Util.error_string unless ZMQ::Util.resultcode_ok?(rc) + fail 'failed to bind at ' + bind + ': ' + ZMQ::Util.error_string unless ZMQ::Util.resultcode_ok?(rc) # Lookup port number that was allocated in case it was set to 0 endpoint = '' From 2ef1c714aad63f1476a81f5df605c0ee799ab2f3 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Wed, 11 Feb 2015 09:19:17 +0000 Subject: [PATCH 43/75] Implement SRV record for both ZMQ and TCP using a shared address pool library TODO: Tests --- src/lc-lib/transports/address_pool.go | 177 ++++++++++++++++++++++++++ src/lc-lib/transports/tcp.go | 137 +++----------------- src/lc-lib/transports/zmq.go | 39 +++--- 3 files changed, 212 insertions(+), 141 deletions(-) create mode 100644 src/lc-lib/transports/address_pool.go diff --git a/src/lc-lib/transports/address_pool.go b/src/lc-lib/transports/address_pool.go new file mode 100644 index 00000000..48d3a3c4 --- /dev/null +++ b/src/lc-lib/transports/address_pool.go @@ -0,0 +1,177 @@ +/* + * Copyright 2014 Jason Woods. + * + * This file is a modification of code from Logstash Forwarder. + * Copyright 2012-2013 Jordan Sissel and contributors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package transports + +import ( + "fmt" + "net" + "math/rand" + "strconv" + "time" +) + +type AddressPool struct { + servers []string + rfc2782 bool + rfc2782Service string + roundrobin int + host_is_ip bool + host string + addresses []*net.TCPAddr +} + +func init() { + rand.Seed(time.Now().UnixNano()) +} + +func NewAddressPool(servers []string) *AddressPool { + ret := &AddressPool{ + servers: servers, + } + + // Randomise the initial host - after this it will round robin + // Round robin after initial attempt ensures we don't retry same host twice, + // and also ensures we try all hosts one by one + ret.roundrobin = rand.Intn(len(servers)) + + return ret +} + +func (p *AddressPool) SetRfc2782(enabled bool, service string) { + p.rfc2782 = enabled + p.rfc2782Service = service +} + +func (p *AddressPool) IsLast() bool { + return p.addresses == nil && p.roundrobin%len(p.servers) == 0 +} + +func (p *AddressPool) Next() (*net.TCPAddr, string, error) { + // Have we exhausted the address list we had? 
+ if p.addresses == nil { + if err := p.populateAddresses(); err != nil { + return nil, "", err + } + } + + next := p.addresses[0] + if len(p.addresses) > 1 { + p.addresses = p.addresses[1:] + } else { + p.addresses = nil + } + + var desc string + if p.host_is_ip { + desc = fmt.Sprintf("%s", next) + } else { + desc = fmt.Sprintf("%s (%s)", next, p.host) + } + + return next, desc, nil +} + +func (p *AddressPool) Host() string { + return p.host +} + +func (p *AddressPool) populateAddresses() (err error) { + // Round robin to the next server + selected := p.servers[p.roundrobin%len(p.servers)] + p.roundrobin++ + + p.addresses = make([]*net.TCPAddr, 0) + + // @hostname means SRV record where the host and port are in the record + if len(selected) > 0 && selected[0] == '@' { + var srvs []*net.SRV + var service, protocol string + + p.host = selected[1:] + p.host_is_ip = false + + if p.rfc2782 { + service, protocol = p.rfc2782Service, "tcp" + } else { + service, protocol = "", "" + } + + _, srvs, err = net.LookupSRV(service, protocol, p.host) + if err != nil { + return fmt.Errorf("DNS SRV lookup failure \"%s\": %s", p.host, err) + } else if len(srvs) == 0 { + return fmt.Errorf("DNS SRV lookup failure \"%s\": No targets found", p.host) + } + + for _, srv := range srvs { + if _, err = p.populateLookup(srv.Target, int(srv.Port)); err != nil { + return + } + } + + return + } + + // Standard host:port declaration + var port_str string + var port uint64 + if p.host, port_str, err = net.SplitHostPort(selected); err != nil { + return fmt.Errorf("Invalid hostport given: %s", selected) + } + + if port, err = strconv.ParseUint(port_str, 10, 16); err != nil { + return fmt.Errorf("Invalid port given: %s", port_str) + } + + if p.host_is_ip, err = p.populateLookup(p.host, int(port)); err != nil { + return + } + + return nil +} + +func (p *AddressPool) populateLookup(host string, port int) (bool, error) { + if ip := net.ParseIP(host); ip != nil { + // IP address + p.addresses = 
append(p.addresses, &net.TCPAddr{ + IP: ip, + Port: port, + }) + + return true, nil + } + + // Lookup the hostname in DNS + ips, err := net.LookupIP(host) + if err != nil { + return false, fmt.Errorf("DNS lookup failure \"%s\": %s", host, err) + } else if len(ips) == 0 { + return false, fmt.Errorf("DNS lookup failure \"%s\": No addresses found", host) + } + + for _, ip := range ips { + p.addresses = append(p.addresses, &net.TCPAddr{ + IP: ip, + Port: port, + }) + } + + return false, nil +} diff --git a/src/lc-lib/transports/tcp.go b/src/lc-lib/transports/tcp.go index be9a48d9..23639890 100644 --- a/src/lc-lib/transports/tcp.go +++ b/src/lc-lib/transports/tcp.go @@ -28,10 +28,8 @@ import ( "fmt" "github.com/driskell/log-courier/src/lc-lib/core" "io/ioutil" - "math/rand" "net" "regexp" - "strconv" "sync" "time" ) @@ -73,10 +71,7 @@ type TransportTcp struct { can_send chan int - roundrobin int - host_is_ip bool - host string - addresses []*net.TCPAddr + addressPool *AddressPool } func NewTcpTransportFactory(config *core.Config, config_path string, unused map[string]interface{}, name string) (core.TransportFactory, error) { @@ -139,14 +134,14 @@ func NewTcpTransportFactory(config *core.Config, config_path string, unused map[ func (f *TransportTcpFactory) NewTransport(config *core.NetworkConfig) (core.Transport, error) { ret := &TransportTcp{ - config: f, - net_config: config, + config: f, + net_config: config, + addressPool: NewAddressPool(config.Servers), } - // Randomise the initial host - after this it will round robin - // Round robin after initial attempt ensures we don't retry same host twice, - // and also ensures we try all hosts one by one - ret.roundrobin = rand.Intn(len(config.Servers)) + if ret.net_config.Rfc2782Srv { + ret.addressPool.SetRfc2782(true, ret.net_config.Rfc2782Service) + } return ret, nil } @@ -165,6 +160,11 @@ func (t *TransportTcp) ReloadConfig(new_net_config *core.NetworkConfig) int { // Publisher handles changes to net_config, but ensure 
we store the latest in case it asks for a reconnect t.net_config = new_net_config + t.addressPool = NewAddressPool(t.net_config.Servers) + + if t.net_config.Rfc2782Srv { + t.addressPool.SetRfc2782(true, t.net_config.Rfc2782Service) + } return core.Reload_None } @@ -174,31 +174,15 @@ func (t *TransportTcp) Init() error { t.disconnect() } - // Have we exhausted the address list we had? - if t.addresses == nil { - if err := t.populateAddresses(); err != nil { - return err - } - } - - // Try next address and drop it from our list - addressport := t.addresses[0].String() - if len(t.addresses) > 1 { - t.addresses = t.addresses[1:] - } else { - t.addresses = nil - } - - var desc string - if t.host_is_ip { - desc = fmt.Sprintf("%s", addressport) - } else { - desc = fmt.Sprintf("%s (%s)", addressport, t.host) + // Try next address + addressport, desc, err := t.addressPool.Next() + if err != nil { + return err } log.Info("Attempting to connect to %s", desc) - tcpsocket, err := net.DialTimeout("tcp", addressport, t.net_config.Timeout) + tcpsocket, err := net.DialTimeout("tcp", addressport.String(), t.net_config.Timeout) if err != nil { return fmt.Errorf("Failed to connect to %s: %s", desc, err) } @@ -209,7 +193,7 @@ func (t *TransportTcp) Init() error { t.config.tls_config.MinVersion = tls.VersionTLS10 // Set the tlsconfig server name for server validation (required since Go 1.3) - t.config.tls_config.ServerName = t.host + t.config.tls_config.ServerName = t.addressPool.Host() t.tlssocket = tls.Client(&transportTcpWrap{transport: t, tcpsocket: tcpsocket}, &t.config.tls_config) t.tlssocket.SetDeadline(time.Now().Add(t.net_config.Timeout)) @@ -249,89 +233,6 @@ func (t *TransportTcp) Init() error { return nil } -func (t *TransportTcp) populateAddresses() (err error) { - // Round robin to the next server - selected := t.net_config.Servers[t.roundrobin%len(t.net_config.Servers)] - t.roundrobin++ - - t.addresses = make([]*net.TCPAddr, 0) - - // @hostname means SRV record where the 
host and port are in the record - if len(t.host) > 0 && t.host[0] == '@' { - var srvs []*net.SRV - var service, protocol string - - t.host_is_ip = false - - if t.net_config.Rfc2782Srv { - service, protocol = t.net_config.Rfc2782Service, "tcp" - } else { - service, protocol = "", "" - } - - _, srvs, err = net.LookupSRV(service, protocol, t.host[1:]) - if err != nil { - return fmt.Errorf("DNS SRV lookup failure \"%s\": %s", t.host, err) - } else if len(srvs) == 0 { - return fmt.Errorf("DNS SRV lookup failure \"%s\": No targets found", t.host) - } - - for _, srv := range srvs { - if _, err = t.populateLookup(srv.Target, int(srv.Port)); err != nil { - return - } - } - - return - } - - // Standard host:port declaration - var port_str string - var port uint64 - if t.host, port_str, err = net.SplitHostPort(selected); err != nil { - return fmt.Errorf("Invalid hostport given: %s", selected) - } - - if port, err = strconv.ParseUint(port_str, 10, 16); err != nil { - return fmt.Errorf("Invalid port given: %s", port_str) - } - - if t.host_is_ip, err = t.populateLookup(t.host, int(port)); err != nil { - return - } - - return nil -} - -func (t *TransportTcp) populateLookup(host string, port int) (bool, error) { - if ip := net.ParseIP(host); ip != nil { - // IP address - t.addresses = append(t.addresses, &net.TCPAddr{ - IP: ip, - Port: port, - }) - - return true, nil - } - - // Lookup the hostname in DNS - ips, err := net.LookupIP(host) - if err != nil { - return false, fmt.Errorf("DNS lookup failure \"%s\": %s", host, err) - } else if len(ips) == 0 { - return false, fmt.Errorf("DNS lookup failure \"%s\": No addresses found", host) - } - - for _, ip := range ips { - t.addresses = append(t.addresses, &net.TCPAddr{ - IP: ip, - Port: port, - }) - } - - return false, nil -} - func (t *TransportTcp) disconnect() { if t.shutdown == nil { return @@ -495,8 +396,6 @@ func (t *TransportTcp) Shutdown() { // Register the transports func init() { - rand.Seed(time.Now().UnixNano()) - 
core.RegisterTransport("tcp", NewTcpTransportFactory) core.RegisterTransport("tls", NewTcpTransportFactory) } diff --git a/src/lc-lib/transports/zmq.go b/src/lc-lib/transports/zmq.go index c1789cf4..0d61ebf5 100644 --- a/src/lc-lib/transports/zmq.go +++ b/src/lc-lib/transports/zmq.go @@ -25,7 +25,6 @@ import ( "fmt" zmq "github.com/alecthomas/gozmq" "github.com/driskell/log-courier/src/lc-lib/core" - "net" "regexp" "runtime" "sync" @@ -297,34 +296,30 @@ func (t *TransportZmq) Init() (err error) { } // Register endpoints + pool := NewAddressPool(t.net_config.Servers) endpoints := 0 - for _, hostport := range t.net_config.Servers { - submatch := t.config.hostport_re.FindSubmatch([]byte(hostport)) - if submatch == nil { - log.Warning("Invalid host:port given: %s", hostport) - continue - } - // Lookup the server in DNS (if this is IP it will implicitly return) - host := string(submatch[1]) - port := string(submatch[2]) - addresses, err := net.LookupHost(host) + if t.net_config.Rfc2782Srv { + pool.SetRfc2782(true, t.net_config.Rfc2782Service) + } + + for { + addressport, desc, err := pool.Next() if err != nil { - log.Warning("DNS lookup failure \"%s\": %s", host, err) - continue + return err } - // Register each address - for _, address := range addresses { - addressport := net.JoinHostPort(address, port) + if err = t.dealer.Connect("tcp://" + addressport.String()); err != nil { + log.Warning("Failed to register %s with ZMQ, skipping", desc) + goto NextAddress + } - if err = t.dealer.Connect("tcp://" + addressport); err != nil { - log.Warning("Failed to register %s (%s) with ZMQ, skipping", addressport, host) - continue - } + log.Info("Registered %s with ZMQ", desc) + endpoints++ - log.Info("Registered %s (%s) with ZMQ", addressport, host) - endpoints++ + NextAddress: + if pool.IsLast() { + break } } From 729997caba31ccee5823d72a4cee3ea5bb140131 Mon Sep 17 00:00:00 2001 From: Ted Timmons Date: Wed, 11 Feb 2015 09:18:35 -0800 Subject: [PATCH 44/75] fix incorrect logic on 
log level documentation --- docs/Configuration.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/Configuration.md b/docs/Configuration.md index 6014e247..729ebd28 100644 --- a/docs/Configuration.md +++ b/docs/Configuration.md @@ -268,7 +268,7 @@ instead of the system FQDN. Available values: "critical", "error", "warning", "notice", "info", "debug"* *Requires restart* -The maximum level of detail to produce in Log Courier's internal log. +The minimum level of detail to produce in Log Courier's internal log. ### `"log stdout"` From 4943750ee5e260c158c4cd1d7c3fce0ddfdb9d1d Mon Sep 17 00:00:00 2001 From: Don Johnson Date: Mon, 16 Feb 2015 13:05:49 -0800 Subject: [PATCH 45/75] Update json syntax in sample include config I blindly copied this and spent about 10 minutes hunting a bug that didn't exist :) --- docs/Configuration.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/Configuration.md b/docs/Configuration.md index 000cf660..3af5a474 100644 --- a/docs/Configuration.md +++ b/docs/Configuration.md @@ -513,5 +513,5 @@ following. 
[ { "paths": [ "/var/log/httpd/access.log" ], - "fields": [ "type": "access_log" ] + "fields": { "type": "access_log" } } ] From 3c8d1d39059b81732d8cfc18e4c4efaf861a8c51 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Wed, 18 Feb 2015 09:44:09 +0000 Subject: [PATCH 46/75] Fix hanging stdin when filter codec in use --- src/lc-lib/harvester/harvester.go | 10 ++++++---- src/lc-lib/prospector/info.go | 4 +++- src/log-courier/log-courier.go | 10 +++++----- 3 files changed, 14 insertions(+), 10 deletions(-) diff --git a/src/lc-lib/harvester/harvester.go b/src/lc-lib/harvester/harvester.go index f27d42b3..0bb03723 100644 --- a/src/lc-lib/harvester/harvester.go +++ b/src/lc-lib/harvester/harvester.go @@ -29,9 +29,10 @@ import ( ) type HarvesterFinish struct { - Last_Offset int64 - Error error - Last_Stat os.FileInfo + Last_Event_Offset int64 + Last_Read_Offset int64 + Error error + Last_Stat os.FileInfo } type Harvester struct { @@ -96,7 +97,8 @@ func (h *Harvester) Start(output chan<- *core.EventDescriptor) { go func() { status := &HarvesterFinish{} - status.Last_Offset, status.Error = h.harvest(output) + status.Last_Event_Offset, status.Error = h.harvest(output) + status.Last_Read_Offset = h.offset status.Last_Stat = h.fileinfo h.return_chan <- status close(h.return_chan) diff --git a/src/lc-lib/prospector/info.go b/src/lc-lib/prospector/info.go index 9047ca26..98ac6eaf 100644 --- a/src/lc-lib/prospector/info.go +++ b/src/lc-lib/prospector/info.go @@ -111,7 +111,9 @@ func (pi *prospectorInfo) getSnapshot() *core.Snapshot { func (pi *prospectorInfo) setHarvesterStopped(status *harvester.HarvesterFinish) { pi.running = false - pi.finish_offset = status.Last_Offset + // Resume harvesting from the last event offset, not the last read, to allow codec to read from the last event + // This ensures multiline codec populates correctly on resume + pi.finish_offset = status.Last_Event_Offset if status.Error != nil { pi.status = Status_Failed pi.err = status.Error diff --git 
a/src/log-courier/log-courier.go b/src/log-courier/log-courier.go index 184644a7..1a6b3d85 100644 --- a/src/log-courier/log-courier.go +++ b/src/log-courier/log-courier.go @@ -132,17 +132,17 @@ SignalLoop: admin_listener.Respond(lc.processCommand(command)) case finished := <-harvester_wait: if finished.Error != nil { - log.Notice("An error occurred reading from stdin at offset %d: %s", finished.Last_Offset, finished.Error) + log.Notice("An error occurred reading from stdin at offset %d: %s", finished.Last_Read_Offset, finished.Error) } else { - log.Notice("Finished reading from stdin at offset %d", finished.Last_Offset) + log.Notice("Finished reading from stdin at offset %d", finished.Last_Read_Offset) } lc.harvester = nil // Flush the spooler spooler_imp.Flush() - // Wait for StdinRegistrar to finish - registrar_imp.(*StdinRegistrar).Wait(finished.Last_Offset) + // Wait for StdinRegistrar to receive ACK for the last event we sent + registrar_imp.(*StdinRegistrar).Wait(finished.Last_Event_Offset) lc.cleanShutdown() break SignalLoop @@ -326,7 +326,7 @@ func (lc *LogCourier) cleanShutdown() { if lc.harvester != nil { lc.harvester.Stop() finished := <-lc.harvester.OnFinish() - log.Notice("Aborted reading from stdin at offset %d", finished.Last_Offset) + log.Notice("Aborted reading from stdin at offset %d", finished.Last_Read_Offset) } lc.pipeline.Shutdown() From 0ebcaec3cc4814d0c5438255b0637e0f7e606c6f Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Sat, 21 Feb 2015 09:03:01 +0000 Subject: [PATCH 47/75] Fix a plugin unknown error reported in #118 #118 also reports a plugin shutdown error - that is not fixed here. 
--- lib/log-courier/server_tcp.rb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/lib/log-courier/server_tcp.rb b/lib/log-courier/server_tcp.rb index 8cb7f4ac..567ec745 100644 --- a/lib/log-courier/server_tcp.rb +++ b/lib/log-courier/server_tcp.rb @@ -144,7 +144,7 @@ def run(&block) client = @server.accept rescue EOFError, OpenSSL::SSL::SSLError, IOError => e # Accept failure or other issue - @logger.warn 'Connection failed to accept', :error => e.message, :peer => @tcp_server.peer unless @logger.nil + @logger.warn 'Connection failed to accept', :error => e.message, :peer => @tcp_server.peer unless @logger.nil? client.close rescue nil unless client.nil? next end From 9768d420f40919461db7a963bf75eb63169ac0ff Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Sat, 21 Feb 2015 10:51:58 +0000 Subject: [PATCH 48/75] Convert string "tags" entries to an array Fixes #118 --- lib/logstash/inputs/courier.rb | 1 + 1 file changed, 1 insertion(+) diff --git a/lib/logstash/inputs/courier.rb b/lib/logstash/inputs/courier.rb index 8aaf22dc..0efc41c7 100644 --- a/lib/logstash/inputs/courier.rb +++ b/lib/logstash/inputs/courier.rb @@ -108,6 +108,7 @@ def register def run(output_queue) @log_courier.run do |event| + event['tags'] = [event['tags']] if event.has_key?('tags') && !event['tags'].is_a?(Array) event = LogStash::Event.new(event) decorate event output_queue << event From 0289b258e992b38646fd5196665d3c36744498c9 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Tue, 24 Feb 2015 10:13:57 +0000 Subject: [PATCH 49/75] Fix repeating prospector errors Fixes #119 --- src/lc-lib/prospector/prospector.go | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/lc-lib/prospector/prospector.go b/src/lc-lib/prospector/prospector.go index dd0d4e51..d536b2e2 100644 --- a/src/lc-lib/prospector/prospector.go +++ b/src/lc-lib/prospector/prospector.go @@ -207,7 +207,7 @@ func (p *Prospector) scan(path string, config *core.FileConfig) { if info.status != 
Status_Invalid { // The current entry is not an error, orphan it so we can log one info.orphaned = Orphaned_Yes - } else if info.err != err { + } else if info.err.Error() != err.Error() { // The error is different, remove this entry we'll log a new one delete(p.prospectors, info) } else { From cf832eef2edc86324a15e72d07709b395add463d Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Tue, 24 Feb 2015 15:59:42 +0000 Subject: [PATCH 50/75] Fix a registrar conflict error Fix registrar conflict that can occur if prospector Stat fails. Fixes #112 --- src/lc-lib/prospector/prospector.go | 10 ++-------- 1 file changed, 2 insertions(+), 8 deletions(-) diff --git a/src/lc-lib/prospector/prospector.go b/src/lc-lib/prospector/prospector.go index d536b2e2..ec03cde3 100644 --- a/src/lc-lib/prospector/prospector.go +++ b/src/lc-lib/prospector/prospector.go @@ -206,11 +206,8 @@ func (p *Prospector) scan(path string, config *core.FileConfig) { if is_known { if info.status != Status_Invalid { // The current entry is not an error, orphan it so we can log one - info.orphaned = Orphaned_Yes - } else if info.err.Error() != err.Error() { - // The error is different, remove this entry we'll log a new one - delete(p.prospectors, info) - } else { + info.orphaned = Orphaned_Maybe + } else if info.err.Error() == err.Error() { // The same error occurred - don't log it again info.update(nil, p.iteration) continue @@ -345,9 +342,6 @@ func (p *Prospector) flagDuplicateError(file string, info *prospectorInfo) { return } } - - // Remove the old info - delete(p.prospectors, info) } // Flag duplicate error and save it From a99c2f86cf5a6914ca4739665d854059777b289b Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Thu, 26 Feb 2015 23:44:01 +0000 Subject: [PATCH 51/75] Address pool improvements and ZMQ support (plain)ZMQ transport support for address pool to allow shared utilisation of SRV record syntax Improvements to address pool randomisation Add test suite for address pool --- 
src/lc-lib/transports/address_pool.go | 83 +++++--- src/lc-lib/transports/address_pool_test.go | 231 +++++++++++++++++++++ src/lc-lib/transports/zmq.go | 10 +- 3 files changed, 294 insertions(+), 30 deletions(-) create mode 100644 src/lc-lib/transports/address_pool_test.go diff --git a/src/lc-lib/transports/address_pool.go b/src/lc-lib/transports/address_pool.go index 48d3a3c4..719a5467 100644 --- a/src/lc-lib/transports/address_pool.go +++ b/src/lc-lib/transports/address_pool.go @@ -49,7 +49,10 @@ func NewAddressPool(servers []string) *AddressPool { // Randomise the initial host - after this it will round robin // Round robin after initial attempt ensures we don't retry same host twice, // and also ensures we try all hosts one by one - ret.roundrobin = rand.Intn(len(servers)) + rnd := rand.Intn(len(servers)) + if rnd != 0 { + ret.servers = append(append(make([]string, 0), servers[rnd:]...), servers[:rnd]...) + } return ret } @@ -60,13 +63,19 @@ func (p *AddressPool) SetRfc2782(enabled bool, service string) { } func (p *AddressPool) IsLast() bool { - return p.addresses == nil && p.roundrobin%len(p.servers) == 0 + return p.addresses == nil +} + +func (p *AddressPool) IsLastServer() bool { + return p.roundrobin%len(p.servers) == 0 } func (p *AddressPool) Next() (*net.TCPAddr, string, error) { // Have we exhausted the address list we had? 
if p.addresses == nil { + p.addresses = make([]*net.TCPAddr, 0) if err := p.populateAddresses(); err != nil { + p.addresses = nil return nil, "", err } } @@ -88,50 +97,52 @@ func (p *AddressPool) Next() (*net.TCPAddr, string, error) { return next, desc, nil } +func (p *AddressPool) NextServer() (string, error) { + // Round robin to the next server + selected := p.servers[p.roundrobin%len(p.servers)] + p.roundrobin++ + + // @hostname means SRV record where the host and port are in the record + if len(selected) > 0 && selected[0] == '@' { + srvs, err := p.processSrv(selected[1:]) + if err != nil { + return "", err + } + return net.JoinHostPort(srvs[0].Target, strconv.FormatUint(uint64(srvs[0].Port), 10)), nil + } + + return selected, nil +} + func (p *AddressPool) Host() string { return p.host } -func (p *AddressPool) populateAddresses() (err error) { +func (p *AddressPool) populateAddresses() (error) { // Round robin to the next server selected := p.servers[p.roundrobin%len(p.servers)] p.roundrobin++ - p.addresses = make([]*net.TCPAddr, 0) - // @hostname means SRV record where the host and port are in the record if len(selected) > 0 && selected[0] == '@' { - var srvs []*net.SRV - var service, protocol string - - p.host = selected[1:] - p.host_is_ip = false - - if p.rfc2782 { - service, protocol = p.rfc2782Service, "tcp" - } else { - service, protocol = "", "" - } - - _, srvs, err = net.LookupSRV(service, protocol, p.host) + srvs, err := p.processSrv(selected[1:]) if err != nil { - return fmt.Errorf("DNS SRV lookup failure \"%s\": %s", p.host, err) - } else if len(srvs) == 0 { - return fmt.Errorf("DNS SRV lookup failure \"%s\": No targets found", p.host) + return err } for _, srv := range srvs { - if _, err = p.populateLookup(srv.Target, int(srv.Port)); err != nil { - return + if _, err := p.populateLookup(srv.Target, int(srv.Port)); err != nil { + return err } } - return + return nil } // Standard host:port declaration var port_str string var port uint64 + var err 
error if p.host, port_str, err = net.SplitHostPort(selected); err != nil { return fmt.Errorf("Invalid hostport given: %s", selected) } @@ -141,12 +152,34 @@ func (p *AddressPool) populateAddresses() (err error) { } if p.host_is_ip, err = p.populateLookup(p.host, int(port)); err != nil { - return + return err } return nil } +func (p *AddressPool) processSrv(server string) ([]*net.SRV, error) { + var service, protocol string + + p.host = server + p.host_is_ip = false + + if p.rfc2782 { + service, protocol = p.rfc2782Service, "tcp" + } else { + service, protocol = "", "" + } + + _, srvs, err := net.LookupSRV(service, protocol, p.host) + if err != nil { + return nil, fmt.Errorf("DNS SRV lookup failure \"%s\": %s", p.host, err) + } else if len(srvs) == 0 { + return nil, fmt.Errorf("DNS SRV lookup failure \"%s\": No targets found", p.host) + } + + return srvs, nil +} + func (p *AddressPool) populateLookup(host string, port int) (bool, error) { if ip := net.ParseIP(host); ip != nil { // IP address diff --git a/src/lc-lib/transports/address_pool_test.go b/src/lc-lib/transports/address_pool_test.go new file mode 100644 index 00000000..68e61a37 --- /dev/null +++ b/src/lc-lib/transports/address_pool_test.go @@ -0,0 +1,231 @@ +/* + * Copyright 2014 Jason Woods. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package transports + +import ( + "testing" +) + +func TestAddressPoolIP(t *testing.T) { + // Test failures when parsing + pool := NewAddressPool([]string{"127.0.0.1:1234"}) + + addr, desc, err := pool.Next() + + // Should have succeeded + if err != nil { + t.Error("Address pool did not parse IP correctly: ", err) + } else if addr == nil { + t.Error("Address pool returned a nil addr") + } else if desc != "127.0.0.1:1234" { + t.Error("Address pool did not return correct desc: ", desc) + } else if addr.String() != "127.0.0.1:1234" { + t.Error("Address pool did not return correct addr: ", addr.String()) + } +} + +func TestAddressPoolHost(t *testing.T) { + // Test failures when parsing + pool := NewAddressPool([]string{"google-public-dns-a.google.com:555"}) + + addr, desc, err := pool.Next() + + if err != nil { + t.Error("Address pool did not parse Host correctly: ", err) + } else if addr == nil { + t.Error("Address pool returned a nil addr") + } else if desc != "8.8.8.8:555 (google-public-dns-a.google.com)" { + t.Error("Address pool did not return correct desc: ", desc) + } else if addr.String() != "8.8.8.8:555" { + t.Error("Address pool did not return correct addr: ", addr.String()) + } +} + +func TestAddressPoolHostMultiple(t *testing.T) { + // Test failures when parsing + pool := NewAddressPool([]string{"google.com:555"}) + + for i := 0; i < 2; i++ { + addr, _, err := pool.Next() + + // Should have succeeded + if err != nil { + t.Error("Address pool did not parse Host correctly: ", err) + } else if addr == nil { + t.Error("Address pool returned a nil addr") + } + + if i == 0 { + if pool.IsLast() { + t.Error("Address pool did not return multiple addresses") + } + } + } +} + +func TestAddressPoolSrv(t *testing.T) { + // Test failures when parsing + pool := NewAddressPool([]string{"@_xmpp-server._tcp.google.com"}) + + addr, _, err := pool.Next() + + // Should have succeeded + if err != nil { + t.Error("Address pool did not parse SRV
correctly: ", err) + } else if addr == nil { + t.Error("Address pool did not returned nil addr") + } +} + +func TestAddressPoolSrvRfc(t *testing.T) { + // Test failures when parsing + pool := NewAddressPool([]string{"@google.com"}) + pool.SetRfc2782(true, "xmpp-server") + + addr, _, err := pool.Next() + + // Should have succeeeded + if err != nil { + t.Error("Address pool did not parse RFC SRV correctly: ", err) + } else if addr == nil { + t.Error("Address pool did not returned nil addr") + } +} + +func TestAddressPoolInvalid(t *testing.T) { + // Test failures when parsing + pool := NewAddressPool([]string{"127.0..0:1234"}) + + _, _, err := pool.Next() + + // Should have failed + if err == nil { + t.Logf("Address pool did not return failure correctly") + t.FailNow() + } +} + +func TestAddressPoolHostFailure(t *testing.T) { + // Test failures when parsing + pool := NewAddressPool([]string{"google-public-dns-not-exist.google.com:1234"}) + + _, _, err := pool.Next() + + // Should have failed + if err == nil { + t.Logf("Address pool did not return failure correctly") + t.FailNow() + } +} + +func TestAddressPoolIsLast(t *testing.T) { + // Test that IsLastServer works correctly + pool := NewAddressPool([]string{"google.com:1234"}) + + // Should report as last + if !pool.IsLast() { + t.Error("Address pool IsLast did not return correctly") + } + + for i := 0; i <= 42; i++ { + _, _, err := pool.Next() + + // Should succeed + if err != nil { + t.Error("Address pool did not parse Host correctly") + } + + if i <= 1 { + // Should not report as last + if pool.IsLast() { + t.Error("Address pool IsLast did not return correctly") + } + + continue + } + + // Wait until last + if pool.IsLast() { + return + } + } + + // Hit 42 servers without hitting last + t.Error("Address pool IsLast did not return correctly") +} + +func TestAddressPoolIsLastServer(t *testing.T) { + // Test that IsLastServer works correctly + pool := NewAddressPool([]string{"127.0.0.1:1234", "127.0.0.1:1234", 
"127.0.0.1:1234"}) + + // Should report as last server + if !pool.IsLastServer() { + t.Error("Address pool IsLastServer did not return correctly") + } + + for i := 0; i < 3; i++ { + _, _, err := pool.Next() + + // Should succeed + if err != nil { + t.Error("Address pool did not parse IP correctly") + } + + if i < 2 { + // Should not report as last server + if pool.IsLastServer() { + t.Error("Address pool IsLastServer did not return correctly") + } + + continue + } + } + + // Should report as last server + if !pool.IsLastServer() { + t.Error("Address pool IsLastServer did not return correctly") + } +} + +func TestAddressPoolNextServer(t *testing.T) { + // Test that IsLastServer works correctly + pool := NewAddressPool([]string{"google.com:1234", "google.com:1234"}) + + cnt := 0 + for i := 0; i < 42; i++ { + addr, err := pool.NextServer() + + // Should succeed + if err != nil { + t.Error("Address pool did not parse IP correctly") + } else if addr != "google.com:1234" { + t.Error("Address pool returned incorrect address: ", addr) + } + + cnt++ + + // Break on last server + if pool.IsLastServer() { + break + } + } + + // Should have stopped at 2 servers + if cnt != 2 { + t.Error("Address pool NextServer failed") + } +} diff --git a/src/lc-lib/transports/zmq.go b/src/lc-lib/transports/zmq.go index 0d61ebf5..2426a2b0 100644 --- a/src/lc-lib/transports/zmq.go +++ b/src/lc-lib/transports/zmq.go @@ -304,21 +304,21 @@ func (t *TransportZmq) Init() (err error) { } for { - addressport, desc, err := pool.Next() + addressport, err := pool.NextServer() if err != nil { return err } - if err = t.dealer.Connect("tcp://" + addressport.String()); err != nil { - log.Warning("Failed to register %s with ZMQ, skipping", desc) + if err = t.dealer.Connect("tcp://" + addressport); err != nil { + log.Warning("Failed to register %s with ZMQ, skipping", addressport) goto NextAddress } - log.Info("Registered %s with ZMQ", desc) + log.Info("Registered %s with ZMQ", addressport) endpoints++ 
NextAddress: - if pool.IsLast() { + if pool.IsLastServer() { break } } From cd3c566fb08e0c52b4de8a4223acc6b2890da1f6 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Fri, 27 Feb 2015 13:14:39 +0000 Subject: [PATCH 52/75] Add a quick stdin registrar test --- src/log-courier/stdin_registrar.go | 21 ++++--- src/log-courier/stdin_registrar_test.go | 78 +++++++++++++++++++++++++ 2 files changed, 88 insertions(+), 11 deletions(-) create mode 100644 src/log-courier/stdin_registrar_test.go diff --git a/src/log-courier/stdin_registrar.go b/src/log-courier/stdin_registrar.go index 50926c43..cddfc71a 100644 --- a/src/log-courier/stdin_registrar.go +++ b/src/log-courier/stdin_registrar.go @@ -34,6 +34,8 @@ type StdinRegistrar struct { registrar_chan chan []registrar.EventProcessor signal_chan chan int64 references int + wait_offset *int64 + last_offset int64 } func newStdinRegistrar(pipeline *core.Pipeline) *StdinRegistrar { @@ -55,9 +57,6 @@ func (r *StdinRegistrar) Run() { r.group.Done() }() - var wait_offset *int64 - var last_offset int64 - state := make(map[core.Stream]*registrar.FileState) state[nil] = ®istrar.FileState{} @@ -65,25 +64,25 @@ RegistrarLoop: for { select { case signal := <-r.signal_chan: - if last_offset == signal { + r.wait_offset = new(int64) + *r.wait_offset = signal + + if r.last_offset == signal { break RegistrarLoop } - wait_offset = new(int64) - *wait_offset = signal - - log.Debug("Stdin registrar received stdin EOF offset of %d", *wait_offset) + log.Debug("Stdin registrar received stdin EOF offset of %d", *r.wait_offset) case events := <-r.registrar_chan: for _, event := range events { event.Process(state) } - if wait_offset != nil && state[nil].Offset >= *wait_offset { + r.last_offset = state[nil].Offset + + if r.wait_offset != nil && state[nil].Offset >= *r.wait_offset { log.Debug("Stdin registrar has reached end of stdin") break RegistrarLoop } - - last_offset = state[nil].Offset case <-r.OnShutdown(): break RegistrarLoop } diff --git 
a/src/log-courier/stdin_registrar_test.go b/src/log-courier/stdin_registrar_test.go new file mode 100644 index 00000000..a3aa0e45 --- /dev/null +++ b/src/log-courier/stdin_registrar_test.go @@ -0,0 +1,78 @@ +/* + * Copyright 2014 Jason Woods. + * + * This file is a modification of code from Logstash Forwarder. + * Copyright 2012-2013 Jordan Sissel and contributors. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package main + +import ( + "github.com/driskell/log-courier/src/lc-lib/core" + "github.com/driskell/log-courier/src/lc-lib/registrar" + "testing" + "time" +) + +func newTestStdinRegistrar() (*core.Pipeline, *StdinRegistrar) { + pipeline := core.NewPipeline() + return pipeline, newStdinRegistrar(pipeline) +} + +func newEventSpool(offset int64) []*core.EventDescriptor { + // Prepare an event spool with single event of specified offset + return []*core.EventDescriptor{ + &core.EventDescriptor{ + Stream: nil, + Offset: offset, + Event: []byte{}, + }, + } +} + +func TestStdinRegistrarWait(t *testing.T) { + p, r := newTestStdinRegistrar() + + // Start the stdin registrar + go func() { + r.Run() + }() + + c := r.Connect() + c.Add(registrar.NewAckEvent(newEventSpool(13))) + c.Send() + + r.Wait(13) + + wait := make(chan int) + go func() { + p.Wait() + wait <- 1 + }() + + select { + case <-wait: + break + case <-time.After(5 * time.Second): + t.Error("Timeout waiting for stdin registrar shutdown") + return + } + + if r.last_offset != 13 { + 
t.Error("Last offset was incorrect: ", r.last_offset) + } else if r.wait_offset == nil || *r.wait_offset != 13 { + t.Error("Wait offset was incorrect: ", r.wait_offset) + } +} From 470a0ff4324be9fe37b85413f623f4001568fdc4 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Fri, 27 Feb 2015 13:37:43 +0000 Subject: [PATCH 53/75] Add codec tests for Teardown result --- src/lc-lib/codecs/filter_test.go | 42 ++++++++++++++++------------- src/lc-lib/codecs/multiline_test.go | 40 ++++++++++++++++++++++----- 2 files changed, 58 insertions(+), 24 deletions(-) diff --git a/src/lc-lib/codecs/filter_test.go b/src/lc-lib/codecs/filter_test.go index 2b450227..3c1d0c19 100644 --- a/src/lc-lib/codecs/filter_test.go +++ b/src/lc-lib/codecs/filter_test.go @@ -38,11 +38,14 @@ func TestFilter(t *testing.T) { codec.Event(6, 7, "DEBUG Next line") if len(filter_lines) != 1 { - t.Logf("Wrong line count received") - t.FailNow() + t.Error("Wrong line count received") } else if filter_lines[0] != "NEXT line" { - t.Logf("Wrong line[0] received: %s", filter_lines[0]) - t.FailNow() + t.Error("Wrong line[0] received: %s", filter_lines[0]) + } + + offset := codec.Teardown() + if offset != 7 { + t.Error("Teardown returned incorrect offset: ", offset) } } @@ -61,17 +64,18 @@ func TestFilterNegate(t *testing.T) { codec.Event(6, 7, "DEBUG Next line") if len(filter_lines) != 3 { - t.Logf("Wrong line count received") - t.FailNow() + t.Error("Wrong line count received") } else if filter_lines[0] != "DEBUG First line" { - t.Logf("Wrong line[0] received: %s", filter_lines[0]) - t.FailNow() + t.Error("Wrong line[0] received: %s", filter_lines[0]) } else if filter_lines[1] != "ANOTHER line" { - t.Logf("Wrong line[1] received: %s", filter_lines[1]) - t.FailNow() + t.Error("Wrong line[1] received: %s", filter_lines[1]) } else if filter_lines[2] != "DEBUG Next line" { - t.Logf("Wrong line[2] received: %s", filter_lines[2]) - t.FailNow() + t.Error("Wrong line[2] received: %s", filter_lines[2]) + } + + offset := 
codec.Teardown() + if offset != 7 { + t.Error("Teardown returned incorrect offset: ", offset) } } @@ -90,13 +94,15 @@ func TestFilterMultiple(t *testing.T) { codec.Event(6, 7, "DEBUG Next line") if len(filter_lines) != 2 { - t.Logf("Wrong line count received") - t.FailNow() + t.Error("Wrong line count received") } else if filter_lines[0] != "DEBUG First line" { - t.Logf("Wrong line[0] received: %s", filter_lines[0]) - t.FailNow() + t.Error("Wrong line[0] received: %s", filter_lines[0]) } else if filter_lines[1] != "NEXT line" { - t.Logf("Wrong line[1] received: %s", filter_lines[1]) - t.FailNow() + t.Error("Wrong line[1] received: %s", filter_lines[1]) + } + + offset := codec.Teardown() + if offset != 7 { + t.Error("Teardown returned incorrect offset: ", offset) } } diff --git a/src/lc-lib/codecs/multiline_test.go b/src/lc-lib/codecs/multiline_test.go index 87d51188..3ca7ba51 100644 --- a/src/lc-lib/codecs/multiline_test.go +++ b/src/lc-lib/codecs/multiline_test.go @@ -85,6 +85,11 @@ func TestMultilinePrevious(t *testing.T) { t.Logf("Wrong line count received") t.FailNow() } + + offset := codec.Teardown() + if offset != 5 { + t.Error("Teardown returned incorrect offset: ", offset) + } } func TestMultilinePreviousNegate(t *testing.T) { @@ -107,6 +112,11 @@ func TestMultilinePreviousNegate(t *testing.T) { t.Logf("Wrong line count received") t.FailNow() } + + offset := codec.Teardown() + if offset != 5 { + t.Error("Teardown returned incorrect offset: ", offset) + } } func TestMultilinePreviousTimeout(t *testing.T) { @@ -117,7 +127,7 @@ func TestMultilinePreviousTimeout(t *testing.T) { "pattern": "^(ANOTHER|NEXT) ", "what": "previous", "negate": false, - "previous timeout": "5s", + "previous timeout": "3s", }, checkMultiline, t) // Send some data @@ -126,8 +136,8 @@ func TestMultilinePreviousTimeout(t *testing.T) { codec.Event(4, 5, "ANOTHER line") codec.Event(6, 7, "DEBUG Next line") - // Allow 3 seconds - time.Sleep(3 * time.Second) + // Allow a second + 
time.Sleep(time.Second) multiline_lock.Lock() if multiline_lines != 1 { @@ -136,8 +146,8 @@ func TestMultilinePreviousTimeout(t *testing.T) { } multiline_lock.Unlock() - // Allow 7 seconds - time.Sleep(7 * time.Second) + // Allow 5 seconds + time.Sleep(5 * time.Second) multiline_lock.Lock() if multiline_lines != 2 { @@ -146,7 +156,10 @@ func TestMultilinePreviousTimeout(t *testing.T) { } multiline_lock.Unlock() - codec.Teardown() + offset := codec.Teardown() + if offset != 7 { + t.Error("Teardown returned incorrect offset: ", offset) + } } func TestMultilineNext(t *testing.T) { @@ -169,6 +182,11 @@ func TestMultilineNext(t *testing.T) { t.Logf("Wrong line count received") t.FailNow() } + + offset := codec.Teardown() + if offset != 5 { + t.Error("Teardown returned incorrect offset: ", offset) + } } func TestMultilineNextNegate(t *testing.T) { @@ -191,6 +209,11 @@ func TestMultilineNextNegate(t *testing.T) { t.Logf("Wrong line count received") t.FailNow() } + + offset := codec.Teardown() + if offset != 5 { + t.Error("Teardown returned incorrect offset: ", offset) + } } func checkMultilineMaxBytes(start_offset int64, end_offset int64, text string) { @@ -231,4 +254,9 @@ func TestMultilineMaxBytes(t *testing.T) { t.Logf("Wrong line count received") t.FailNow() } + + offset := codec.Teardown() + if offset != 5 { + t.Error("Teardown returned incorrect offset: ", offset) + } } From 004d08fb2988bbc2078c22bb0bcc6984bad42066 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Fri, 27 Feb 2015 13:50:49 +0000 Subject: [PATCH 54/75] Allow for IPv6 resolution during tests --- src/lc-lib/transports/address_pool_test.go | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/src/lc-lib/transports/address_pool_test.go b/src/lc-lib/transports/address_pool_test.go index 68e61a37..91126a6b 100644 --- a/src/lc-lib/transports/address_pool_test.go +++ b/src/lc-lib/transports/address_pool_test.go @@ -48,9 +48,9 @@ func TestAddressPoolHost(t *testing.T) { t.Error("Address pool 
did not parse Host correctly: ", err) } else if addr == nil { t.Error("Address pool did not returned nil addr") - } else if desc != "8.8.8.8:555 (google-public-dns-a.google.com)" { + } else if desc != "8.8.8.8:555 (google-public-dns-a.google.com)" && desc != "[2001:4860:4860::8888]:555 (google-public-dns-a.google.com)" { t.Error("Address pool did not return correct desc: ", desc) - } else if addr.String() != "8.8.8.8:555" { + } else if addr.String() != "8.8.8.8:555" && addr.String() != "[2001:4860:4860::8888]:555" { t.Error("Address pool did not return correct addr: ", addr.String()) } } From 781478c8c11ba9ae63762c4e0a28625b4bbc9a11 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Fri, 27 Feb 2015 16:12:54 +0000 Subject: [PATCH 55/75] Fix tests by switching to outlook.com for IsLast test --- src/lc-lib/transports/address_pool_test.go | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/lc-lib/transports/address_pool_test.go b/src/lc-lib/transports/address_pool_test.go index 91126a6b..5f8fcebc 100644 --- a/src/lc-lib/transports/address_pool_test.go +++ b/src/lc-lib/transports/address_pool_test.go @@ -134,7 +134,7 @@ func TestAddressPoolHostFailure(t *testing.T) { func TestAddressPoolIsLast(t *testing.T) { // Test that IsLastServer works correctly - pool := NewAddressPool([]string{"google.com:1234"}) + pool := NewAddressPool([]string{"outlook.com:1234"}) // Should report as last if !pool.IsLast() { From b52c5df05b23f5696fb7bcaa9f98e73409b0bb9c Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Fri, 27 Feb 2015 16:36:59 +0000 Subject: [PATCH 56/75] Update readme with repository instructions Update wording for ZeroMQ --- README.md | 62 +++++++++++++++++++++++++++++++++++++++++-------------- 1 file changed, 46 insertions(+), 16 deletions(-) diff --git a/README.md b/README.md index c2d43670..69db9cd8 100644 --- a/README.md +++ b/README.md @@ -12,10 +12,10 @@ with many fixes and behavioural improvements. 
- [Features](#features) - [Installation](#installation) - - [Requirements](#requirements) - - [Building](#building) + - [Public Repositories](#public-repositories) + - [From Source](#from-source) - [Logstash Integration](#logstash-integration) - - [Building with ZMQ support](#building-with-zmq-support) + - [ZeroMQ support](#zeromq-support) - [Generating Certificates and Keys](#generating-certificates-and-keys) - [Documentation](#documentation) @@ -48,28 +48,58 @@ plugin ## Installation -### Requirements +### Public Repositories -1. \*nix, OS X or Windows +**RPM** + +The author maintains a **COPR** repository with RedHat/CentOS compatible RPMs +that may be installed using `yum`. This repository depends on the widely used +**EPEL** repository for dependencies. + +The **EPEL** repository can be installed automatically on CentOS distributions +by running `yum install epel-release`. Otherwise, you may follow the +instructions on the [EPEL homepage](https://fedoraproject.org/wiki/EPEL). + +To install the Log Courier repository, download the corresponding `.repo` +configuration file below, and place it in `/etc/yum.repos.d`. Log Courier may +then be installed using `yum install log-courier`. + +***CentOS/RedHat 6.x***: [driskell-log-courier-epel-6.repo](https://copr.fedoraproject.org/coprs/driskell/log-courier/repo/epel-6/driskell-log-courier-epel-6.repo) +***CentOS/RedHat 7.x***: +[driskell-log-courier-epel-7.repo](https://copr.fedoraproject.org/coprs/driskell/log-courier/repo/epel-6/driskell-log-courier-epel-7.repo) + +***NOTE:*** The RPM packages versions of Log Courier do not currently support +the `zmq` encrypted transport. They do support the `plainzmq` transport. + +**DEB** + +A Debian/Ubuntu compatible PPA repository is under consideration. At the moment, +no such repository exists. + +### From Source + +You will need the following: + +1. Linux, Unix, OS X or Windows 1. The [golang](http://golang.org/doc/install) compiler tools (1.2-1.4) 1. 
[git](http://git-scm.com) 1. GNU make -***\*nix:*** *Most requirements can usually be installed by your favourite package +***Linux/Unix:*** *Most requirements can usually be installed by your favourite package manager.* ***OS X:*** *Git and GNU make are provided automatically by XCode.* ***Windows:*** *GNU make for Windows can be found [here](http://gnuwin32.sourceforge.net/packages/make.htm).* -### Building - -To build, simply run `make` as follows. +To build the binaries, simply run `make` as follows. git clone https://github.com/driskell/log-courier cd log-courier make -The log-courier program can then be found in the 'bin' folder. +The log-courier program can then be found in the 'bin' folder. This can be +manually installed anywhere on your system. Startup scripts for various +platforms can be found in the [contrib/initscripts](contrib/initscripts) folder. *Note: If you receive errors whilst running `make`, try `gmake` instead.* @@ -87,14 +117,14 @@ Install using the Logstash 1.5+ Plugin manager. Detailed instructions, including integration with Logstash 1.4.x, can be found on the [Logstash Integration](docs/LogstashIntegration.md) page. -### Building with ZMQ support +### ZeroMQ support -To use the 'plainzmq' and 'zmq' transports, you will need to install -[ZeroMQ](http://zeromq.org/intro:get-the-software) (>=3.2 for cleartext -'plainzmq', >=4.0 for encrypted 'zmq'). +To use the 'plainzmq' or 'zmq' transports, you will need to install +[ZeroMQ](http://zeromq.org/intro:get-the-software) (>=3.2 for 'plainzmq', >=4.0 +for 'zmq' which supports encryption). -***\*nix:*** *ZeroMQ >=3.2 is usually available via the package manager. ZeroMQ >=4.0 -may need to be built and installed manually.* +***Linux\Unix:*** *ZeroMQ >=3.2 is usually available via the package manager. 
+ZeroMQ >=4.0 may need to be built and installed manually.* ***OS X:*** *ZeroMQ can be installed via [Homebrew](http://brew.sh).* ***Windows:*** *ZeroMQ will need to be built and installed manually.* From eecbba3a5e9c1713229fe0df89f7d4077486eb51 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Fri, 27 Feb 2015 17:18:09 +0000 Subject: [PATCH 57/75] Cleanup README.md --- README.md | 35 ++++++++++++++++++----------------- 1 file changed, 18 insertions(+), 17 deletions(-) diff --git a/README.md b/README.md index 69db9cd8..415a2e06 100644 --- a/README.md +++ b/README.md @@ -12,11 +12,13 @@ with many fixes and behavioural improvements. - [Features](#features) - [Installation](#installation) - - [Public Repositories](#public-repositories) - - [From Source](#from-source) - - [Logstash Integration](#logstash-integration) - - [ZeroMQ support](#zeromq-support) - - [Generating Certificates and Keys](#generating-certificates-and-keys) +- [Public Repositories](#public-repositories) + - [RPM](#rpm) + - [DEB](#deb) +- [Building From Source](#building-from-source) +- [Logstash Integration](#logstash-integration) +- [ZeroMQ support](#zeromq-support) +- [Generating Certificates and Keys](#generating-certificates-and-keys) - [Documentation](#documentation) @@ -46,11 +48,9 @@ shipping speed and status * [Logstash Integration](docs/LogstashIntegration.md) with an input and output plugin -## Installation +## Public Repositories -### Public Repositories - -**RPM** +### RPM The author maintains a **COPR** repository with RedHat/CentOS compatible RPMs that may be installed using `yum`. This repository depends on the widely used @@ -64,19 +64,20 @@ To install the Log Courier repository, download the corresponding `.repo` configuration file below, and place it in `/etc/yum.repos.d`. Log Courier may then be installed using `yum install log-courier`. 
-***CentOS/RedHat 6.x***: [driskell-log-courier-epel-6.repo](https://copr.fedoraproject.org/coprs/driskell/log-courier/repo/epel-6/driskell-log-courier-epel-6.repo) -***CentOS/RedHat 7.x***: +* ***CentOS/RedHat 6.x***: [driskell-log-courier-epel-6.repo](https://copr.fedoraproject.org/coprs/driskell/log-courier/repo/epel-6/driskell-log-courier-epel-6.repo) +* ***CentOS/RedHat 7.x***: [driskell-log-courier-epel-7.repo](https://copr.fedoraproject.org/coprs/driskell/log-courier/repo/epel-6/driskell-log-courier-epel-7.repo) ***NOTE:*** The RPM packages versions of Log Courier do not currently support the `zmq` encrypted transport. They do support the `plainzmq` transport. -**DEB** +### DEB A Debian/Ubuntu compatible PPA repository is under consideration. At the moment, -no such repository exists. +no such repository exists. PRs and geberal advice are all welcome and will be +appreciated. -### From Source +## Building From Source You will need the following: @@ -103,7 +104,7 @@ platforms can be found in the [contrib/initscripts](contrib/initscripts) folder. *Note: If you receive errors whilst running `make`, try `gmake` instead.* -### Logstash Integration +## Logstash Integration Log Courier does not utilise the lumberjack Logstash plugin and instead uses its own custom plugin. This allows significant enhancements to the integration far @@ -117,7 +118,7 @@ Install using the Logstash 1.5+ Plugin manager. Detailed instructions, including integration with Logstash 1.4.x, can be found on the [Logstash Integration](docs/LogstashIntegration.md) page. -### ZeroMQ support +## ZeroMQ support To use the 'plainzmq' or 'zmq' transports, you will need to install [ZeroMQ](http://zeromq.org/intro:get-the-software) (>=3.2 for 'plainzmq', >=4.0 @@ -143,7 +144,7 @@ the Log Courier hosts are of the same major version. 
A Log Courier host that has ZeroMQ 4.0.5 will not work with a Logstash host using ZeroMQ 3.2.4 (but will work with a Logstash host using ZeroMQ 4.0.4.)** -### Generating Certificates and Keys +## Generating Certificates and Keys Running `make selfsigned` will automatically build and run the `lc-tlscert` utility that can quickly and easily generate a self-signed certificate for the From cb124e3e9105a1e31a9626201e178404e1703582 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Fri, 27 Feb 2015 17:26:06 +0000 Subject: [PATCH 58/75] Update README.md Brevity Tweaks --- README.md | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/README.md b/README.md index 415a2e06..92ce0498 100644 --- a/README.md +++ b/README.md @@ -64,18 +64,18 @@ To install the Log Courier repository, download the corresponding `.repo` configuration file below, and place it in `/etc/yum.repos.d`. Log Courier may then be installed using `yum install log-courier`. -* ***CentOS/RedHat 6.x***: [driskell-log-courier-epel-6.repo](https://copr.fedoraproject.org/coprs/driskell/log-courier/repo/epel-6/driskell-log-courier-epel-6.repo) -* ***CentOS/RedHat 7.x***: +* **CentOS/RedHat 6.x**: [driskell-log-courier-epel-6.repo](https://copr.fedoraproject.org/coprs/driskell/log-courier/repo/epel-6/driskell-log-courier-epel-6.repo) +* **CentOS/RedHat 7.x**: [driskell-log-courier-epel-7.repo](https://copr.fedoraproject.org/coprs/driskell/log-courier/repo/epel-6/driskell-log-courier-epel-7.repo) -***NOTE:*** The RPM packages versions of Log Courier do not currently support -the `zmq` encrypted transport. They do support the `plainzmq` transport. +***NOTE:*** *The RPM packages versions of Log Courier are built using ZeroMQ 3.2 and +therefore do not support the encrypted `zmq` transport. They do support the +unencrypted `plainzmq` transport.* ### DEB -A Debian/Ubuntu compatible PPA repository is under consideration. At the moment, -no such repository exists. 
PRs and geberal advice are all welcome and will be -appreciated. +A Debian/Ubuntu compatible **PPA** repository is under consideration. At the moment, +no such repository exists. ## Building From Source From ddc7966042021c177f07c358552eac2a0dac34da Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Sat, 28 Feb 2015 09:42:26 +0000 Subject: [PATCH 59/75] Use shields.io badges --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 92ce0498..1c61e852 100644 --- a/README.md +++ b/README.md @@ -1,4 +1,4 @@ -# Log Courier [![Build Status](https://travis-ci.org/driskell/log-courier.svg?branch=develop)](https://travis-ci.org/driskell/log-courier) +# Log Courier [![Build Status](https://img.shields.io/travis/driskell/log-courier/develop.svg)](https://travis-ci.org/driskell/log-courier) [![Latest Release](https://img.shields.io/github/release/driskell/log-courier.svg)](https://github.com/driskell/log-courier/releases/latest) [![Gem Version](https://img.shields.io/gem/v/log-courier.svg)](https://rubygems.org/gems/log-courier) Log Courier is a tool created to ship log files speedily and securely to remote [Logstash](http://logstash.net) instances for processing whilst using From 64b59b3a2a94a2f5d5b66560cfc43c4805fd181b Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Sat, 28 Feb 2015 09:53:56 +0000 Subject: [PATCH 60/75] Strip broken badges --- README.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/README.md b/README.md index 1c61e852..9706d36d 100644 --- a/README.md +++ b/README.md @@ -1,4 +1,6 @@ -# Log Courier [![Build Status](https://img.shields.io/travis/driskell/log-courier/develop.svg)](https://travis-ci.org/driskell/log-courier) [![Latest Release](https://img.shields.io/github/release/driskell/log-courier.svg)](https://github.com/driskell/log-courier/releases/latest) [![Gem Version](https://img.shields.io/gem/v/log-courier.svg)](https://rubygems.org/gems/log-courier) +# Log Courier + +[![Build 
Status](https://img.shields.io/travis/driskell/log-courier/develop.svg)](https://travis-ci.org/driskell/log-courier) Log Courier is a tool created to ship log files speedily and securely to remote [Logstash](http://logstash.net) instances for processing whilst using From 2ad548fd64e8eb9477f5e67915d4fec7dddbf6b0 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Sat, 28 Feb 2015 11:51:03 +0000 Subject: [PATCH 61/75] Output plugin TLC Implement clean shutdown (Fixes random gem test failure) Implement extra debug logging Fix crash if a logger is not provided in the options --- lib/log-courier/client.rb | 307 ++++++++++++++++++++++++-------------- 1 file changed, 191 insertions(+), 116 deletions(-) diff --git a/lib/log-courier/client.rb b/lib/log-courier/client.rb index 205368b8..6876daa1 100644 --- a/lib/log-courier/client.rb +++ b/lib/log-courier/client.rb @@ -122,6 +122,16 @@ def initialize(options = {}) run_spooler end + # TODO: Make these configurable? + @keepalive_timeout = 1800 + @network_timeout = 30 + + # TODO: Make pending payload max configurable? + @max_pending_payloads = 100 + + @retry_payload = nil + @received_payloads = Queue.new + @pending_ping = false # Start the IO thread @@ -137,11 +147,17 @@ def publish(event) return end - def shutdown - # Raise a shutdown signal in the spooler and wait for it - @spooler_thread.raise ShutdownSignal - @io_thread.raise ShutdownSignal - @spooler_thread.join + def shutdown(force=false) + if force + # Raise a shutdown signal in the spooler and wait for it + @spooler_thread.raise ShutdownSignal + @spooler_thread.join + @io_thread.raise ShutdownSignal + else + @event_queue.push nil + @spooler_thread.join + @io_control << ['!', nil] + end @io_thread.join return @pending_payloads.length == 0 end @@ -157,7 +173,13 @@ def run_spooler begin loop do event = @event_queue.pop next_flush - Time.now.to_i + + if event.nil? 
+ raise ShutdownSignal + end + spooled.push(event) + break if spooled.length >= @options[:spool_size] end rescue TimeoutError @@ -166,17 +188,18 @@ def run_spooler end if spooled.length >= @options[:spool_size] - @logger.debug 'Flushing full spool', :events => spooled.length + @logger.debug 'Flushing full spool', :events => spooled.length unless @logger.nil? else - @logger.debug 'Flushing spool due to timeout', :events => spooled.length + @logger.debug 'Flushing spool due to timeout', :events => spooled.length unless @logger.nil? end # Pass through to io_control but only if we're ready to send @send_mutex.synchronize do - @send_cond.wait(@send_mutex) unless @send_ready + @send_cond.wait(@send_mutex) until @send_ready @send_ready = false - @io_control << ['E', spooled] end + + @io_control << ['E', spooled] end return rescue ShutdownSignal @@ -184,118 +207,17 @@ def run_spooler end def run_io - # TODO: Make keepalive configurable? - @keepalive_timeout = 1800 - - # TODO: Make pending payload max configurable? - max_pending_payloads = 100 - - retry_payload = nil - - can_send = true - loop do # Reconnect loop @client.connect @io_control - reset_keepalive + @timeout = Time.now.to_i + @keepalive_timeout - # Capture send exceptions - begin - # IO loop - loop do - catch :keepalive do - begin - action = @io_control.pop @keepalive_next - Time.now.to_i - - # Process the action - case action[0] - when 'S' - # If we're flushing through the pending, pick from there - unless retry_payload.nil? 
- # Regenerate data if we need to - retry_payload.data = buffer_jdat_data(retry_payload.events, retry_payload.nonce) if retry_payload.data == nil - - # Send and move onto next - @client.send 'JDAT', retry_payload.data - - retry_payload = retry_payload.next - throw :keepalive - end - - # Ready to send, allow spooler to pass us something - @send_mutex.synchronize do - @send_ready = true - @send_cond.signal - end - - can_send = true - when 'E' - # If we have too many pending payloads, pause the IO - if @pending_payloads.length + 1 >= max_pending_payloads - @client.pause_send - end - - # Received some events - send them - send_jdat action[1] - - # The send action will trigger another "S" if we have more send buffer - can_send = false - when 'R' - # Received a message - signature, message = action[1..2] - case signature - when 'PONG' - process_pong message - when 'ACKN' - process_ackn message - else - # Unknown message - only listener is allowed to respond with a "????" message - # TODO: What should we do? Just ignore for now and let timeouts conquer - end - when 'F' - # Reconnect, an error occurred - break - end - rescue TimeoutError - # Keepalive timeout hit, send a PING unless we were awaiting a PONG - if @pending_ping - # Timed out, break into reconnect - fail TimeoutError - end - - # Is send full? can_send will be false if so - # We should've started receiving ACK by now so time out - fail TimeoutError unless can_send - - # Send PING - send_ping - - # We may have filled send buffer - can_send = false - end - end - - # Reset keepalive timeout - reset_keepalive - end - rescue ProtocolError => e - # Reconnect required due to a protocol error - @logger.warn 'Protocol error', :error => e.message unless @logger.nil? - rescue TimeoutError - # Reconnect due to timeout - @logger.warn 'Timeout occurred' unless @logger.nil? 
- rescue ShutdownSignal - # Shutdown, break out - break - rescue StandardError, NativeException => e - # Unknown error occurred - @logger.warn e, :hint => 'Unknown error' unless @logger.nil? - end + run_io_loop # Disconnect and retry payloads @client.disconnect - retry_payload = @first_payload + @retry_payload = @first_payload # TODO: Make reconnect time configurable? sleep 5 @@ -303,11 +225,164 @@ def run_io @client.disconnect return + rescue ShutdownSignal + # Ensure disconnected + @client.disconnect end - def reset_keepalive - @keepalive_next = Time.now.to_i + @keepalive_timeout - return + def run_io_loop() + io_stop = false + can_send = false + + # IO loop + loop do + begin + action = @io_control.pop @timeout - Time.now.to_i + + # Process the action + case action[0] + when 'S' + # If we're flushing through the pending, pick from there + unless @retry_payload.nil? + @logger.debug 'Send is ready, retrying previous payload' unless @logger.nil? + + # Regenerate data if we need to + @retry_payload.data = buffer_jdat_data(@retry_payload.events, @retry_payload.nonce) if @retry_payload.data == nil + + # Send and move onto next + @client.send 'JDAT', @retry_payload.data + + @retry_payload = @retry_payload.next + + # If first send, exit idle mode + if @retry_payload == @first_payload + @timeout = Time.now.to_i + @network_timeout + end + break + end + + # Ready to send, allow spooler to pass us something if we don't + # have something already + if @received_payloads.length != 0 + @logger.debug 'Send is ready, using events from backlog' unless @logger.nil? + send_payload @received_payloads.pop() + else + @logger.debug 'Send is ready, requesting events' unless @logger.nil? + + can_send = true + + @send_mutex.synchronize do + @send_ready = true + @send_cond.signal + end + end + when 'E' + # Were we expecting a payload? Store it if not + if can_send + @logger.debug 'Sending events', :events => action[1].length unless @logger.nil? 
+            send_payload action[1]
+            can_send = false
+          else
+            @logger.debug 'Events received when not ready; saved to backlog' unless @logger.nil?
+            @received_payloads.push action[1]
+          end
+        when 'R'
+          # Received a message
+          signature, message = action[1..2]
+          case signature
+          when 'PONG'
+            process_pong message
+          when 'ACKN'
+            process_ackn message
+          else
+            # Unknown message - only listener is allowed to respond with a "????" message
+            # TODO: What should we do? Just ignore for now and let timeouts conquer
+          end
+
+          # Any pending payloads left?
+          if @pending_payloads.length == 0
+            # Handle shutdown
+            if io_stop
+              raise ShutdownSignal
+            end
+
+            # Enter idle mode
+            @timeout = Time.now.to_i + @keepalive_timeout
+          else
+            # Set network timeout
+            @timeout = Time.now.to_i + @network_timeout
+          end
+        when 'F'
+          # Reconnect, an error occurred
+          break
+        when '!'
+          @logger.debug 'Shutdown request received' unless @logger.nil?
+
+          # Shutdown request received
+          if @pending_payloads.length == 0
+            raise ShutdownSignal
+          end
+
+          @logger.debug 'Delaying shutdown due to pending payloads', :payloads => @pending_payloads.length unless @logger.nil?
+
+          io_stop = true
+
+          # Stop spooler sending
+          can_send = false
+          @send_mutex.synchronize do
+            @send_ready = false
+          end
+        end
+      rescue TimeoutError
+        if @pending_payloads.length != 0
+          # Network timeout
+          fail TimeoutError
+        end
+
+        # Keepalive timeout hit, send a PING unless we were awaiting a PONG
+        if @pending_ping
+          # Timed out, break into reconnect
+          fail TimeoutError
+        end
+
+        # Stop spooler sending
+        can_send = false
+        @send_mutex.synchronize do
+          @send_ready = false
+        end
+
+        # Send PING
+        send_ping
+
+        @timeout = Time.now.to_i + @network_timeout
+      end
+    end
+  rescue ProtocolError => e
+    # Reconnect required due to a protocol error
+    @logger.warn 'Protocol error', :error => e.message unless @logger.nil?
+  rescue TimeoutError
+    # Reconnect due to timeout
+    @logger.warn 'Timeout occurred' unless @logger.nil?
+ rescue ShutdownSignal => e + fail e + rescue StandardError, NativeException => e + # Unknown error occurred + @logger.warn e, :hint => 'Unknown error' unless @logger.nil? + end + + def send_payload(payload) + # If we have too many pending payloads, pause the IO + if @pending_payloads.length + 1 >= @max_pending_payloads + @client.pause_send + end + + # Received some events - send them + send_jdat payload + + # Leave idle mode if this is the first payload after idle + if @pending_payloads.length == 1 + @timeout = Time.now.to_i + @network_timeout + end end def generate_nonce From 6d2cc2b69260edbc8e9ec734a090f636913b0189 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Sat, 28 Feb 2015 11:58:56 +0000 Subject: [PATCH 62/75] Revert from 'addresses' back to 'hosts' - this is to limit breaking changes --- lib/logstash/outputs/courier.rb | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/lib/logstash/outputs/courier.rb b/lib/logstash/outputs/courier.rb index 51984dc5..a87b1d2f 100644 --- a/lib/logstash/outputs/courier.rb +++ b/lib/logstash/outputs/courier.rb @@ -25,7 +25,7 @@ class Courier < LogStash::Outputs::Base milestone 1 # The list of addresses Log Courier should send to - config :addresses, :validate => :array, :required => true + config :hosts, :validate => :array, :required => true # The port to connect to config :port, :validate => :number, :required => true @@ -55,7 +55,7 @@ def register options = { logger: @logger, - addresses: @addresses, + addresses: @hosts, port: @port, ssl_ca: @ssl_ca, ssl_certificate: @ssl_certificate, From 4dc87798afeccce648c278c4a5436e08c715040d Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Sat, 28 Feb 2015 12:00:13 +0000 Subject: [PATCH 63/75] Report an error if 'hosts' option contains more than 1 address, as only 1 is supported at this time --- lib/log-courier/client.rb | 7 ++++++- lib/log-courier/client_tcp.rb | 4 ---- 2 files changed, 6 insertions(+), 5 deletions(-) diff --git a/lib/log-courier/client.rb 
b/lib/log-courier/client.rb index 6876daa1..16740c88 100644 --- a/lib/log-courier/client.rb +++ b/lib/log-courier/client.rb @@ -96,6 +96,8 @@ def initialize(options = {}) transport: 'tls', spool_size: 1024, idle_timeout: 5, + port: nil, + addresses: [], }.merge!(options) @logger = @options[:logger] @@ -109,6 +111,9 @@ def initialize(options = {}) fail 'output/courier: \'transport\' must be tcp or tls' end + fail 'output/courier: \'addresses\' must contain at least one address' if @options[:addresses].empty? + fail 'output/courier: \'addresses\' only supports a single address at this time' if @options[:addresses].length > 1 + @event_queue = EventQueue.new @options[:spool_size] @pending_payloads = {} @first_payload = nil @@ -364,7 +369,7 @@ def run_io_loop() # Reconnect due to timeout @logger.warn 'Timeout occurred' unless @logger.nil? rescue ShutdownSignal => e - fail e + raise rescue StandardError, NativeException => e # Unknown error occurred @logger.warn e, :hint => 'Unknown error' unless @logger.nil? diff --git a/lib/log-courier/client_tcp.rb b/lib/log-courier/client_tcp.rb index 793d92b3..5a5860f8 100644 --- a/lib/log-courier/client_tcp.rb +++ b/lib/log-courier/client_tcp.rb @@ -28,8 +28,6 @@ def initialize(options = {}) @options = { logger: nil, transport: 'tls', - port: nil, - addresses: [], ssl_ca: nil, ssl_certificate: nil, ssl_key: nil, @@ -42,8 +40,6 @@ def initialize(options = {}) fail "output/courier: '#{k}' is required" if @options[k].nil? end - fail 'output/courier: \'addresses\' must contain at least one address' if @options[:addresses].empty? 
- if @options[:transport] == 'tls' c = 0 [:ssl_certificate, :ssl_key].each do From 14b7dd7f13ef1b8c64ca8400b7a77646b8d4d978 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Sat, 28 Feb 2015 13:29:00 +0000 Subject: [PATCH 64/75] Fix the last and final dead time communication race --- src/lc-lib/harvester/harvester.go | 16 ++++++++++------ 1 file changed, 10 insertions(+), 6 deletions(-) diff --git a/src/lc-lib/harvester/harvester.go b/src/lc-lib/harvester/harvester.go index 0bb03723..c5c3a58d 100644 --- a/src/lc-lib/harvester/harvester.go +++ b/src/lc-lib/harvester/harvester.go @@ -242,23 +242,27 @@ ReadLoop: return h.codec.Teardown(), err } - // Store latest stat() - h.fileinfo = info - if info.Size() < h.offset { log.Warning("Unexpected file truncation, seeking to beginning: %s", h.path) h.file.Seek(0, os.SEEK_SET) h.offset = 0 // TODO: How does this impact a partial line reader buffer? - // TODO: How does this imapct multiline? + // TODO: How does this impact multiline? continue } - if age := time.Since(last_read_time); age > h.stream_config.DeadTime { - // if last_read_time was more than dead time, this file is probably dead. Stop watching it. + // If last_read_time was more than dead time, this file is probably dead. 
+ // Stop only if the mtime did not change since last check - this stops a + // race where we hit EOF but as we Stat() the mtime is updated - this mtime + // is the one we monitor in order to resume checking, so we need to check it + // didn't already update + if age := time.Since(last_read_time); age > h.stream_config.DeadTime && h.fileinfo.ModTime() == info.ModTime() { log.Info("Stopping harvest of %s; last change was %v ago", h.path, age-(age%time.Second)) return h.codec.Teardown(), nil } + + // Store latest stat() + h.fileinfo = info } log.Info("Harvester for %s exiting", h.path) From 4571fcfe2a882713747161afcbb3bad130594cc5 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Sat, 28 Feb 2015 13:29:26 +0000 Subject: [PATCH 65/75] Update change log for impending 1.5 release --- docs/ChangeLog.md | 24 +++++++++++++++++++----- 1 file changed, 19 insertions(+), 5 deletions(-) diff --git a/docs/ChangeLog.md b/docs/ChangeLog.md index 476e59d5..c6cc13e9 100644 --- a/docs/ChangeLog.md +++ b/docs/ChangeLog.md @@ -4,7 +4,7 @@ **Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)* -- [1.4](#14) +- [1.5](#15) - [1.3](#13) - [1.2](#12) - [1.1](#11) @@ -20,9 +20,9 @@ -## 1.4 +## 1.5 -*???* +*28th February 2015* ***Breaking Changes*** @@ -33,6 +33,9 @@ argument, and configure the codec and additional fields in the new `stdin` configuration file section. Log Courier will now also exit cleanly once all data from stdin has been read and acknowledged by the server (previously it would hang forever.) +* The output plugin will fail startup if more than a single `host` address is +provided. Previous versions would simply ignore additional hosts and cause +potential confusion. ***Changes*** @@ -43,7 +46,7 @@ failures will still round robin. configuration. (Thanks @mhughes - #88) * A configuration reload will now reopen log files. 
(#91) * Implement support for SRV record server entries (#85) -* Fix Log Courier output plugin (#96) +* Fix Log Courier output plugin (#96 #98) * Fix Logstash input plugin with zmq transport failing when discarding a message due to peer_recv_queue being exceeded (#92) * Fix a TCP transport race condition that could deadlock publisher on a send() @@ -58,6 +61,17 @@ support it. Also, using this option enables it globally within Logstash due to option leakage within the JrJackson gem (#103) * Fix filter codec not saving offset correctly when dead time reached or stdin EOF reached (reported in #108) +* Fix Logstash input plugin crash if the fields configuration for Log Courier +specifies a "tags" field that is not an Array, and the input configuration for +Logstash also specified tags (#118) +* Fix a registrar conflict bug that can occur if a followed log file becomes +inaccessible to Log Courier (#122) +* Fix inaccessible log files causing errors to be reported to the Log Courier +log target every 10 seconds. Only a single error should be reported (#119) +* Fix unknown plugin error in Logstash input plugin if a connection fails to +accept (#118) +* Fix Logstash input plugin crash with plainzmq and zmq transports when the +listen address is already in use (Thanks to @mheese - #112) ***Security*** @@ -66,7 +80,7 @@ courier plugins to further enhance security when using the TLS transport. 
## 1.3 -*2nd January 2014* +*2nd January 2015* ***Changes*** From e5bf9ddb47faeefe15dc4edd3d390b1f155c193e Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Sat, 28 Feb 2015 14:39:07 +0000 Subject: [PATCH 66/75] Update README.md --- README.md | 115 ++++++++++++++++++++++++++++++++---------------------- 1 file changed, 68 insertions(+), 47 deletions(-) diff --git a/README.md b/README.md index 9706d36d..afaa4d4b 100644 --- a/README.md +++ b/README.md @@ -2,9 +2,9 @@ [![Build Status](https://img.shields.io/travis/driskell/log-courier/develop.svg)](https://travis-ci.org/driskell/log-courier) -Log Courier is a tool created to ship log files speedily and securely to -remote [Logstash](http://logstash.net) instances for processing whilst using -small amounts of local resources. The project is an enhanced fork of +Log Courier is a lightweight tool created to ship log files speedily and +securely, with low resource usage, to remote [Logstash](http://logstash.net) +instances. The project is an enhanced fork of [Logstash Forwarder](https://github.com/elasticsearch/logstash-forwarder) 0.3.1 with many fixes and behavioural improvements. @@ -12,43 +12,62 @@ with many fixes and behavioural improvements. 
**Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)* -- [Features](#features) -- [Installation](#installation) +- [Main Features](#main-features) +- [Differences to Logstash Forwarder](#differences-to-logstash-forwarder) - [Public Repositories](#public-repositories) - [RPM](#rpm) - [DEB](#deb) - [Building From Source](#building-from-source) - [Logstash Integration](#logstash-integration) -- [ZeroMQ support](#zeromq-support) - [Generating Certificates and Keys](#generating-certificates-and-keys) +- [ZeroMQ support](#zeromq-support) - [Documentation](#documentation) -## Features - -Log Courier implements the following features: +## Main Features -* Follow active log files -* Follow rotations -* Follow standard input stream -* Suspend tailing after periods of inactivity -* Set [extra fields](docs/Configuration.md#fields), supporting hashes and arrays -(`tags: ['one','two']`) +* Read events from a file or over a Unix pipeline +* Follow log file rotations and movements +* Close files after inactivity, reopening if they change +* Add [extra fields](docs/Configuration.md#fields) to events prior to shipping * [Reload configuration](docs/Configuration.md#reloading) without restarting -* Secure TLS shipping transport with server certificate verification -* TLS client certificate verification -* Secure CurveZMQ shipping transport to load balance across multiple Logstash -instances (optional, requires ZeroMQ 4+) -* Plaintext TCP shipping transport for configuration simplicity in local -networks -* Plaintext ZMQ shipping transport -* [Administration utility](docs/AdministrationUtility.md) to monitor the -shipping speed and status -* [Multiline](docs/codecs/Multiline.md) codec -* [Filter](docs/codecs/Filter.md) codec +* Ship events securely using TLS with server (and optionally client) certificate +verification +* Ship events securely to multiple Logstash instances using ZeroMQ with Curve +security (requires ZeroMQ 4+) +* Ship events in plaintext 
using TCP
+* Ship events in plaintext using ZeroMQ (requires ZeroMQ 3+)
+* Monitor shipping speed and status with the
+[Administration utility](docs/AdministrationUtility.md)
+* Pre-process events using codecs (e.g. [Multiline](docs/codecs/Multiline.md),
+[Filter](docs/codecs/Filter.md))
 * [Logstash Integration](docs/LogstashIntegration.md) with an input and output
 plugin
+* Very low resource usage
+
+## Differences to Logstash Forwarder
+
+Log Courier is an enhanced fork of
+[Logstash Forwarder](https://github.com/elasticsearch/logstash-forwarder) 0.3.1
+with many fixes and behavioural improvements. The primary changes are:
+
+* The publisher protocol was rewritten to avoid many causes of "i/o timeout"
+that would result in duplicate events sent to Logstash
+* The prospector and registrar were heavily revamped to handle log rotations and
+movements far more reliably, and to report errors cleanly
+* The harvester was improved to retry if an error occurred rather than stop
+* The configuration can be reloaded without restarting
+* An administration tool was created to display the shipping speed and status
+* Fields configurations can contain arrays and dictionaries, not just strings
+* Codec support has been added to allow multiline processing at the sender side
+* A TCP transport was implemented to allow configuration without the need for
+SSL certificates
+* Support for client SSL certificate verification
+* Peer IP address and certificate DN can be added to received events in Logstash
+to distinguish events sent from different instances
+* Windows: Log files are not locked allowing log rotation to occur
+* Windows: Log rotation is detected correctly

 ## Public Repositories

@@ -66,30 +85,30 @@ To install the Log Courier repository, download the corresponding `.repo`
 configuration file below, and place it in `/etc/yum.repos.d`. Log Courier may
 then be installed using `yum install log-courier`.
-* **CentOS/RedHat 6.x**: [driskell-log-courier-epel-6.repo](https://copr.fedoraproject.org/coprs/driskell/log-courier/repo/epel-6/driskell-log-courier-epel-6.repo)
+* **CentOS/RedHat 6.x**: [driskell-log-courier-epel-6.repo](https://copr.fedoraproject.org/coprs/driskell/log-courier/repo/epel-6/driskell-log-courier-epel-6.repo)
 * **CentOS/RedHat 7.x**: [driskell-log-courier-epel-7.repo](https://copr.fedoraproject.org/coprs/driskell/log-courier/repo/epel-6/driskell-log-courier-epel-7.repo)

-***NOTE:*** *The RPM packages versions of Log Courier are built using ZeroMQ 3.2 and
-therefore do not support the encrypted `zmq` transport. They do support the
+***NOTE:*** *The RPM package versions of Log Courier are built using ZeroMQ 3.2
+and therefore do not support the encrypted `zmq` transport. They do support the
 unencrypted `plainzmq` transport.*

 ### DEB

-A Debian/Ubuntu compatible **PPA** repository is under consideration. At the moment,
-no such repository exists.
+A Debian/Ubuntu compatible **PPA** repository is under consideration. At the
+moment, no such repository exists.

 ## Building From Source

-You will need the following:
+Requirements:

 1. Linux, Unix, OS X or Windows
 1. The [golang](http://golang.org/doc/install) compiler tools (1.2-1.4)
 1. [git](http://git-scm.com)
 1. GNU make

-***Linux/Unix:*** *Most requirements can usually be installed by your favourite package
-manager.*
+***Linux/Unix:*** *Most requirements can usually be installed by your favourite
+package manager.*

 ***OS X:*** *Git and GNU make are provided automatically by XCode.*

 ***Windows:*** *GNU make for Windows can be found [here](http://gnuwin32.sourceforge.net/packages/make.htm).*

@@ -120,6 +139,21 @@
 Install using the Logstash 1.5+ Plugin manager. Detailed instructions, including
 integration with Logstash 1.4.x, can be found on the
 [Logstash Integration](docs/LogstashIntegration.md) page.
+## Generating Certificates and Keys + +Log Courier provides two commands to help generate SSL certificates and Curve +keys, `lc-tlscert` and `lc-curvekey` respectively. Both are bundled with the +packages provided by the public repositories. + +When building from source, running `make selfsigned` will automatically build +and run the `lc-tlscert` utility that can quickly and easily generate a +self-signed certificate for the TLS shipping transport. + +Likewise, running `make curvekey` will automatically build and run the +`lc-curvekey` utility that can quickly and easily generate CurveZMQ key pairs +for the CurveZMQ shipping transport. This tool is only available when Log +Courier is built with ZeroMQ >=4.0. + ## ZeroMQ support To use the 'plainzmq' or 'zmq' transports, you will need to install @@ -146,19 +180,6 @@ the Log Courier hosts are of the same major version. A Log Courier host that has ZeroMQ 4.0.5 will not work with a Logstash host using ZeroMQ 3.2.4 (but will work with a Logstash host using ZeroMQ 4.0.4.)** -## Generating Certificates and Keys - -Running `make selfsigned` will automatically build and run the `lc-tlscert` -utility that can quickly and easily generate a self-signed certificate for the -TLS shipping transport. - -Likewise, running `make curvekey` will automatically build and run the -`lc-curvekey` utility that can quickly and easily generate CurveZMQ key pairs -for the CurveZMQ shipping transport. This tool is only available when Log -Courier is built with ZeroMQ >=4.0. - -Both tools also generate the required configuration file snippets. 
- ## Documentation * [Administration Utility](docs/AdministrationUtility.md) From 899ff5095f09e02bf2c71e66a9b03750854e0a40 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Sat, 28 Feb 2015 14:42:08 +0000 Subject: [PATCH 67/75] Update SRV configuration documentation --- docs/Configuration.md | 12 ++++++++---- 1 file changed, 8 insertions(+), 4 deletions(-) diff --git a/docs/Configuration.md b/docs/Configuration.md index a90990fa..adc93734 100644 --- a/docs/Configuration.md +++ b/docs/Configuration.md @@ -36,6 +36,8 @@ - [`"curve secret key"`](#curve-secret-key) - [`"max pending payloads"`](#max-pending-payloads) - [`"reconnect"`](#reconnect) + - [`"rfc 2782 srv"`](#rfc-2782-srv) + - [`"rfc 2782 service"`](#rfc-2782-service) - [`"servers"`](#servers) - [`"ssl ca"`](#ssl-ca) - [`"ssl certificate"`](#ssl-certificate) @@ -430,18 +432,20 @@ use RFC 2782 style lookups of the form `_service._proto.example.com`. *String. Optional. Default: "courier"* Specifies the service to request when using RFC 2782 style SRV lookups. Using -the default, "courier", would result in a lookup for -`_courier._tcp.example.com`. +the default, "courier", an "@example.com" server entry would result in a lookup +for `_courier._tcp.example.com`. ### `"servers"` *Array of Strings. Required* -Sets the list of servers to send logs to. Accepted formats for each server entry are: +Sets the list of servers to send logs to. Accepted formats for each server entry +are: * `ipaddress:port` * `hostname:port` (A DNS lookup is performed) -* `@hostname` (A SRV DNS lookup is performed, with further DNS lookups if required) +* `@hostname` (A SRV DNS lookup is performed, with further DNS lookups if +required) The initial server is randomly selected. 
Subsequent connection attempts are made to the next IP address available (if the server had multiple IP addresses) or to From ba06d95da58801dbb88077d7d6d67e8253f3aa8e Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Sat, 28 Feb 2015 14:51:48 +0000 Subject: [PATCH 68/75] Revert "Strip broken badges" This reverts commit 64b59b3a2a94a2f5d5b66560cfc43c4805fd181b. --- README.md | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/README.md b/README.md index afaa4d4b..10e353dc 100644 --- a/README.md +++ b/README.md @@ -1,6 +1,4 @@ -# Log Courier - -[![Build Status](https://img.shields.io/travis/driskell/log-courier/develop.svg)](https://travis-ci.org/driskell/log-courier) +# Log Courier [![Build Status](https://img.shields.io/travis/driskell/log-courier/develop.svg)](https://travis-ci.org/driskell/log-courier) [![Latest Release](https://img.shields.io/github/release/driskell/log-courier.svg)](https://github.com/driskell/log-courier/releases/latest) [![Gem Version](https://img.shields.io/gem/v/log-courier.svg)](https://rubygems.org/gems/log-courier) Log Courier is a lightweight tool created to ship log files speedily and securely, with low resource usage, to remote [Logstash](http://logstash.net) From bc069065d8458138a91f301b1c418dde9d1a1d45 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Sat, 28 Feb 2015 14:55:26 +0000 Subject: [PATCH 69/75] Update README.md badge layout and remove redundant gem badge --- README.md | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/README.md b/README.md index 10e353dc..6e693f35 100644 --- a/README.md +++ b/README.md @@ -1,4 +1,7 @@ -# Log Courier [![Build Status](https://img.shields.io/travis/driskell/log-courier/develop.svg)](https://travis-ci.org/driskell/log-courier) [![Latest Release](https://img.shields.io/github/release/driskell/log-courier.svg)](https://github.com/driskell/log-courier/releases/latest) [![Gem 
Version](https://img.shields.io/gem/v/log-courier.svg)](https://rubygems.org/gems/log-courier) +# Log Courier + +[![Build Status](https://img.shields.io/travis/driskell/log-courier/develop.svg)](https://travis-ci.org/driskell/log-courier) +[![Latest Release](https://img.shields.io/github/release/driskell/log-courier.svg)](https://github.com/driskell/log-courier/releases/latest) Log Courier is a lightweight tool created to ship log files speedily and securely, with low resource usage, to remote [Logstash](http://logstash.net) From 6a3b7eb5b20eed591e07db86ff90a74d65199241 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Sat, 28 Feb 2015 15:01:30 +0000 Subject: [PATCH 70/75] Improve LSF differences in README.md --- README.md | 20 +++++++++++--------- 1 file changed, 11 insertions(+), 9 deletions(-) diff --git a/README.md b/README.md index 6e693f35..19139e70 100644 --- a/README.md +++ b/README.md @@ -53,18 +53,20 @@ Log Courier is an enhanced fork of [Logstash Forwarder](https://github.com/elasticsearch/logstash-forwarder) 0.3.1 with many fixes and behavioural improvements. 
The primary changes are: -* The publisher protocol was rewritten to avoid many causes of "i/o timeout" -that would result in duplicate events sent to Logstash -* The prospector and registrar were heavily revamped to handle log rotations and +* The publisher protocol is rewritten to avoid many causes of "i/o timeout" +which would result in duplicate events sent to Logstash +* The prospector and registrar are heavily revamped to handle log rotations and movements far more reliably, and to report errors cleanly -* The harvester was improved to retry if an error occurred rather than stop +* The harvester is improved to retry if an error occurred rather than stop * The configuration can be reloaded without restarting -* An administration tool was created to display the shipping speed and status +* An administration tool is available which can display the shipping speed and +status of all watched log files * Fields configurations can contain arrays and dictionaries, not just strings -* Codec support has been added to allow multiline processing at the sender side -* A TCP transport was implemented to allow configuration without the need for -SSL certificates -* Support for client SSL certificate verification +* Codec support is available which allows multiline processing at the sender +side +* A TCP transport is available which removes the requirement for SSL +certificates +* There is support for client SSL certificate verification * Peer IP address and certificate DN can be added to received events in Logstash to distinguish events send from different instances * Windows: Log files are not locked allowing log rotation to occur From b54e711df7124d9334b9eabe2ab16b6df9506657 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Sat, 28 Feb 2015 15:03:44 +0000 Subject: [PATCH 71/75] Update changelog --- docs/ChangeLog.md | 1 + 1 file changed, 1 insertion(+) diff --git a/docs/ChangeLog.md b/docs/ChangeLog.md index c6cc13e9..da5704b4 100644 --- a/docs/ChangeLog.md +++ 
b/docs/ChangeLog.md @@ -72,6 +72,7 @@ log target every 10 seconds. Only a single error should be reported (#119) accept (#118) * Fix Logstash input plugin crash with plainzmq and zmq transports when the listen address is already in use (Thanks to @mheese - #112) +* Add support for SRV records in the servers configuration (#85) ***Security*** From cb2f4bd13199224ff7e19d61eb0a64122fbed6be Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Sat, 28 Feb 2015 15:07:37 +0000 Subject: [PATCH 72/75] Fix log-courier gemspec --- log-courier.gemspec.tmpl | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/log-courier.gemspec.tmpl b/log-courier.gemspec.tmpl index ad6ae7e7..a081390b 100644 --- a/log-courier.gemspec.tmpl +++ b/log-courier.gemspec.tmpl @@ -11,7 +11,7 @@ Gem::Specification.new do |gem| gem.require_paths = ['lib'] gem.files = %w( lib/log-courier/client.rb - lib/log-courier/client_tls.rb + lib/log-courier/client_tcp.rb lib/log-courier/event_queue.rb lib/log-courier/server.rb lib/log-courier/server_tcp.rb From 79c9ed26ac05592fd64a0bdfd72e859b17a248ba Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Sat, 28 Feb 2015 15:07:53 +0000 Subject: [PATCH 73/75] Allow version specification in Makefile calls --- Makefile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Makefile b/Makefile index 2af416b3..c27163de 100644 --- a/Makefile +++ b/Makefile @@ -100,7 +100,7 @@ ifneq ($(implyclean),yes) endif fix_version: - build/fix_version + build/fix_version "${FIX_VERSION}" setup_root: build/setup_root From a5885276523c49b484d95a007e65e0c9ec17d097 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Sat, 28 Feb 2015 15:08:24 +0000 Subject: [PATCH 74/75] Update spec file --- contrib/rpm/log-courier.spec | 18 +++++++++++++----- 1 file changed, 13 insertions(+), 5 deletions(-) diff --git a/contrib/rpm/log-courier.spec b/contrib/rpm/log-courier.spec index 0ead8b28..cd54abcb 100644 --- a/contrib/rpm/log-courier.spec +++ b/contrib/rpm/log-courier.spec @@ -32,10 
+32,8 @@ Requires: zeromq3 Requires: logrotate %description -Log Courier is a tool created to transmit log files speedily and securely to -remote Logstash instances for processing whilst using small amounts of local -resources. The project is an enhanced fork of Logstash Forwarder 0.3.1 with many -enhancements and behavioural improvements. +Log Courier is a lightweight tool created to ship log files speedily and +securely, with low resource usage, to remote Logstash instances. %prep %setup -q -n %{name}-%{version} @@ -71,6 +69,8 @@ mkdir -p %{buildroot}%{_sysconfdir}/init.d install -m 0755 contrib/initscripts/redhat-sysv.init %{buildroot}%{_sysconfdir}/init.d/log-courier touch %{buildroot}%{_var}/run/log-courier.pid %endif +mkdir -p %{buildroot}%{_sysconfdir}/sysconfig +install -m 0644 contrib/initscripts/log-courier.sysconfig %{buildroot}%{_sysconfdir}/sysconfig/log-courier # Make the state dir mkdir -p %{buildroot}%{_var}/lib/log-courier @@ -117,7 +117,9 @@ fi %endif %defattr(0644,root,root,0755) -%{_sysconfdir}/log-courier +%dir %{_sysconfdir}/log-courier +%{_sysconfdir}/log-courier/examples +%config(noreplace) %{_sysconfdir}/sysconfig/log-courier %if 0%{?rhel} < 7 %ghost %{_var}/run/log-courier.pid %endif @@ -127,6 +129,12 @@ fi %ghost %{_var}/lib/log-courier/.log-courier %changelog +* Mon Jan 5 2015 Jason Woods - 1.3-1 +- Upgrade to v1.3 + +* Wed Dec 3 2014 Jason Woods - 1.2-5 +- Upgrade to v1.2 final + * Sat Nov 8 2014 Jason Woods - 1.2-4 - Upgrade to v1.2 - Fix stop message on future upgrade From 666ead365030eed415fb5070629903c5122562d0 Mon Sep 17 00:00:00 2001 From: Jason Woods Date: Sat, 28 Feb 2015 15:10:10 +0000 Subject: [PATCH 75/75] Set version 1.5 in spec and version txt --- contrib/rpm/log-courier.spec | 7 +++++-- version_short.txt | 2 +- 2 files changed, 6 insertions(+), 3 deletions(-) diff --git a/contrib/rpm/log-courier.spec b/contrib/rpm/log-courier.spec index cd54abcb..729217d7 100644 --- a/contrib/rpm/log-courier.spec +++ 
b/contrib/rpm/log-courier.spec @@ -4,8 +4,8 @@ Summary: Log Courier Name: log-courier -Version: 1.2 -Release: 4%{dist} +Version: 1.5 +Release: 1%{dist} License: GPL Group: System Environment/Libraries Packager: Jason Woods @@ -129,6 +129,9 @@ fi %ghost %{_var}/lib/log-courier/.log-courier %changelog +* Sat Feb 28 2015 Jason Woods - 1.5-1 +- Upgrade to v1.5 + * Mon Jan 5 2015 Jason Woods - 1.3-1 - Upgrade to v1.3 diff --git a/version_short.txt b/version_short.txt index 7e32cd56..c239c60c 100644 --- a/version_short.txt +++ b/version_short.txt @@ -1 +1 @@ -1.3 +1.5
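
The dead-time race closed by PATCH 64 is subtle enough to deserve a standalone illustration. The sketch below is plain Ruby, not the project's actual harvester code (which is Go); the method name and arguments are invented for illustration. It shows the two-part test the patch introduces: a file is only declared dead when it has been idle longer than the dead time *and* its mtime is unchanged since the previous stat, which closes the race where new data lands between hitting EOF and calling stat.

```ruby
# Illustrative sketch of the dead-time check from PATCH 64 - names are
# invented, this is not the harvester's real API. stop_harvest? returns true
# only when the file has been idle past dead_time AND its mtime did not move
# since the previous stat, so data arriving during the stat is never missed.
def stop_harvest?(last_read, dead_time, previous_mtime, current_mtime)
  age = Time.now - last_read
  age > dead_time && previous_mtime == current_mtime
end

mtime = Time.now - 7200

# Idle for two hours with an unchanged mtime: safe to stop watching.
puts stop_harvest?(Time.now - 7200, 3600, mtime, mtime)

# Same idle period, but the mtime advanced during the stat: keep harvesting.
puts stop_harvest?(Time.now - 7200, 3600, mtime, mtime + 60)
```

Note the ordering in the patch itself: the remembered `fileinfo` is only updated *after* this check, so an mtime change that slips in during the stat is still seen on the next pass.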