
Python Topology DSL #84

Closed · amontalenti opened this issue Dec 2, 2014 · 54 comments

@amontalenti (Contributor)

We currently rely on lein and the Clojure DSL to build topologies. This is nice because the Clojure DSL is bundled with Storm and it allows us to freely mix Python components with JVM (and even other multi-lang) components. And via Clojure, we get local debugging for free via LocalCluster.

But it's also an impediment to new users coming to streamparse expecting "pure Python" support for Storm. See for example this Twitter conversation:

https://twitter.com/sarthakdev/status/539390816339247104

The Clojure DSL was chosen for expediency, but for pure Python topologies, a Python DSL might be even better and would spare the streamparse user from having to learn much about Java/Clojure. The recently-released pyleus approach to this problem is to provide a YAML DSL and a Java builder tool.

One approach to a Python DSL would be to leverage some new work going on in Storm core to make topology configuration dynamic via JSON, as described in #81. Another option would be to have the Python DSL generate Clojure DSL code, which would then be compiled. I haven't yet decided on the best course of action, but I am personally interested in building the Python DSL to make streamparse more usable by Pythonistas out-of-the-box.

@dan-blanchard (Member)

I must say that the YAML DSL that pyleus has is even more readable than a Python DSL would be. Unless we have a reason for wanting to make people use a Python DSL, I'd actually recommend trying to use the JSON format proposed in STORM-561 and generating the Clojure DSL from that until it gets supported directly in Storm.

@amontalenti (Contributor, Author)

Don't you think the JSON format that is in STORM-561 is way too verbose? It would require things like fully-qualified Java class names for each component, etc.

amontalenti changed the title from "Python DSL" to "Alternative to Clojure DSL for topology definition" on Dec 3, 2014
@dan-blanchard (Member)

Good point. I only briefly skimmed it, so it didn't jump out at me immediately. I was just trying to avoid us having to develop yet another Storm DSL.

Maybe we could just expand on the YAML DSL pyleus has to allow for specifying what language a component is written in. We could generate the Clojure DSL from the YAML in the short term and the verbose JSON in the long term.

@amontalenti (Contributor, Author)

Some commentary from Nathan Marz about this topic from STORM-561:

"What I'm proposing instead is to ditch the idea of specifying topologies via configuration files and do it instead via an interpreted general purpose programming language (like Python). By using an interpreted language, you can construct and submit topologies without having to do a compilation, which is the entire purpose of this issue. You can use Java spouts and bolts in Python just as easily as you can within Java or from within a YAML or JSON file. I guarantee you can make a library for specifying topologies that's as nice as doing so via configuration files, except you never lose the power of a general purpose programming language."

@dan-blanchard (Member)

But that will mean needing to communicate with Nimbus via Thrift, won't it? The official Python Thrift library isn't Python 3 compliant (although thriftpy is, and I submitted a PR a while ago to make it compatible with the Storm Thrift files).

@amontalenti (Contributor, Author)

Yes, I think he's in favor of doing the topology setup with Thrift (which, for me, means it would need to work with thriftpy since requiring local thrift installations is a non-starter, IMO).

@amontalenti (Contributor, Author)

One other concern I have with this approach is that configuring topologies with Thrift means you can't use them with LocalCluster (local testing mode), I think. I might be wrong on that. I also wonder how one handles all the JAR building if you don't have lein available.

amontalenti changed the title from "Alternative to Clojure DSL for topology definition" to "Python Topology DSL (supplanting use of Storm's bundled Clojure DSL)" on Apr 12, 2015
amontalenti changed the title from "Python Topology DSL (supplanting use of Storm's bundled Clojure DSL)" to "Python Topology DSL" on Apr 12, 2015
@amontalenti (Contributor, Author)

We have a little example for the Word Count Topology from examples:

class WordCount(Topology):
    word_spout = WordSpout.spec(
            parallelism=2)
    word_count_bolt = WordCountBolt.spec(
            input=WordSpout,
            group_on="word",
            parallelism=8)

Working w/ @omus @cabiad @becitratul at PyCon sprints, this is a little sketch we started to put together.

Any thoughts on this look-and-feel, @dan-blanchard @kbourgoin @emmett9001 @msukmanowsky? We are thinking that the components themselves would include a property for the list of streams and fields they are emitting.

@dan-blanchard (Member)

I think it's a nice start. That said, I think it would be more intuitive if input were set to word_spout instead of WordSpout. Otherwise it feels a little like assigning them to variables doesn't accomplish much.

Would the DSL only be for pure Python topologies? I could imagine expanding it to support both shell and Java bolts/spouts via ShellBolt, ShellSpout, JavaBolt, and JavaSpout classes.

@amontalenti (Contributor, Author)

@dan-blanchard I agree -- we actually had a few variants here. One where you reference the class directly:

    word_count_bolt = WordCountBolt.spec(
            input=WordSpout)  #=> expands to ("word_spout", "default")

one where it's specified as a string:

            input="word_spout"  #=> expands to ("word_spout", "default")

or the tuple form:

            input=("word_spout", "default")  #=> no sugar

So, the idea here was that if you point to the class, the assumption is that there is only one instance of that class in your topology. (Otherwise, the Topology is ill-defined and an error is thrown.) In the second variant, the idea is that there must be a field with that name in the Topology class, which will resolve to that component -- and the "default" stream is implied. The final variant, the tuple form, specifies both a component ID (via the Python field name) and a stream ID (in this case, "default").
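Putting the three forms side by side, here's a hypothetical consolidated sketch, reusing the WordCount example from above (the field names by_class, by_name, and by_tuple are just for illustration):

class WordCount(Topology):
    word_spout = WordSpout.spec(parallelism=2)
    # All three input forms below resolve to ("word_spout", "default"):
    by_class = WordCountBolt.spec(input=WordSpout)                  # class reference
    by_name = WordCountBolt.spec(input="word_spout")                # field-name string
    by_tuple = WordCountBolt.spec(input=("word_spout", "default"))  # explicit (component, stream)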

Make sense?

@amontalenti (Contributor, Author)

@dan-blanchard Re: the question about pure Python topologies, that's all I'm starting with. Java and other shell components would come later. After all, the first step here is auto-generating a CLJ file, so for now, doing true multi-lang could still happen with some Storm / Clojure DSL fu.

@amontalenti (Contributor, Author)

Collaborating on this here: https://floobits.com/amontalenti/streamparse

@amontalenti (Contributor, Author)

Plan:

  • Sketch out the Topology DSL
  • Write a simple validator to make sure the parameters for spec() are correct
  • Figure out what stream groupings there are beyond :shuffle and fields grouping (:direct, :all?)
  • Write a renderer for our topology.clj Clojure files
  • Test that some example topologies rewritten in the Python DSL actually work in Clojure form

Optional:

  • Validate whether the DAG is actually acyclic

@becitratul

According to the documentation: http://storm.apache.org/documentation/Clojure-DSL.html

A stream grouping can be one of the following:

:shuffle: subscribes with a shuffle grouping
Vector of field names, like ["id" "name"]: subscribes with a fields grouping on the specified fields
:global: subscribes with a global grouping
:all: subscribes with an all grouping
:direct: subscribes with a direct grouping

In more detail: http://storm.apache.org/documentation/Concepts.html
Stream groupings

Part of defining a topology is specifying for each bolt which streams it should receive as input. A stream grouping defines how that stream should be partitioned among the bolt's tasks.

There are eight built-in stream groupings in Storm, and you can implement a custom stream grouping by implementing the CustomStreamGrouping interface:

1. Shuffle grouping: Tuples are randomly distributed across the bolt's tasks in a way such that each bolt is guaranteed to get an equal number of tuples.

2. Fields grouping: The stream is partitioned by the fields specified in the grouping. For example, if the stream is grouped by the "user-id" field, tuples with the same "user-id" will always go to the same task, but tuples with different "user-id"s may go to different tasks.

3. Partial Key grouping: The stream is partitioned by the fields specified in the grouping, like the Fields grouping, but is load-balanced between two downstream bolts, which provides better utilization of resources when the incoming data is skewed.

4. All grouping: The stream is replicated across all the bolt's tasks. Use this grouping with care.

5. Global grouping: The entire stream goes to a single one of the bolt's tasks. Specifically, it goes to the task with the lowest id.

6. None grouping: This grouping specifies that you don't care how the stream is grouped. Currently, none groupings are equivalent to shuffle groupings. Eventually, though, Storm will push down bolts with none groupings to execute in the same thread as the bolt or spout they subscribe from (when possible).

7. Direct grouping: This is a special kind of grouping. A stream grouped this way means that the producer of the tuple decides which task of the consumer will receive this tuple. Direct groupings can only be declared on streams that have been declared as direct streams. Tuples emitted to a direct stream must be emitted using one of the emitDirect methods on OutputCollector. A bolt can get the task ids of its consumers either by using the provided TopologyContext or by keeping track of the output of the emit method in OutputCollector (which returns the task ids that the tuple was sent to).

8. Local or shuffle grouping: If the target bolt has one or more tasks in the same worker process, tuples will be shuffled to just those in-process tasks. Otherwise, this acts like a normal shuffle grouping.

@amontalenti (Contributor, Author)

Thank you, @becitratul -- this is now in this branch:

https://github.com/Parsely/streamparse/blob/837a6c689989b201868587a8d640f088af996cea/streamparse/dsl/topology.py#L25-L33

This is the set of groupings the DSL will support for now. Note that the Grouping enum can be used for convenience, e.g. Grouping.SHUFFLE or Grouping.fields("word"). However, you can also just use bare Python structs as a shorthand, e.g. ":shuffle" or ["word"]. This is for the spec() function, e.g. MyBolt.spec(group_on=["word"], ...).
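For illustration, a minimal hypothetical sketch of those convenience forms (ReportBolt is a made-up component):

class WordCount(Topology):
    word_spout = WordSpout.spec(parallelism=2)
    # Enum form and bare-struct shorthand are interchangeable:
    count_bolt = WordCountBolt.spec(input=word_spout,
                                    group_on=Grouping.fields("word"))  # or group_on=["word"]
    report_bolt = ReportBolt.spec(input=count_bolt,
                                  group_on=Grouping.SHUFFLE)           # or group_on=":shuffle"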

@amontalenti (Contributor, Author)

The image on the left is looking a little better than the one on the right, eh? cc @kbourgoin @dan-blanchard @msukmanowsky

[image: topology_dsl, a side-by-side comparison]

@msukmanowsky (Contributor)

👍 but it could be improved even more if you swapped out Vim and Linux.

@mlaprise

Awesome!

@dfdeshom

+1

@omus (Contributor) commented Apr 15, 2015

I'm almost done with the validator. Expect to see something tomorrow.

@dan-blanchard (Member)

Could you give an example of an error that would come up if people override the __init__ method? As I see it, that's going to break a whole lot of crap if they forget to call Component.__init__ either way.

The part that made me really want to move this stuff into the Component class itself is that it feels like the right place for people to be defining output_fields (which is addressed by the current version of the DSL) for their components.

At the very least I think we should make it so output_fields is a pystorm.Component attribute that people set like this:

class WordSpout(Spout):
    output_fields = ['word']

That may be a slightly different issue, though.

@dan-blanchard (Member)

That the instances representing DAG nodes are the same exact type as the instances representing instances of running topology components feels un-Pythonic to me.

I feel like it makes more sense when you consider that no one ever creates instances of these classes manually. They're created by streamparse.run automatically. As far as the average user is concerned, the only time they ever actually instantiate them would be in the DSL.

@amontalenti (Contributor, Author)

I see the point about output fields. I just don't get why WordCountBolt() feels right vs Spec(WordCountBolt) for something that isn't actually representing an instance of WordCountBolt.

It would be like saying User() represents the user table in Django ORM, for the purpose of schema initialization. Just feels wrong to me. Instances of objects shouldn't masquerade as descriptor instances of the class of the object. Put another way, the semantic type of an object instance shouldn't change depending on which args you happen to use for its initializer.

I could divine all sorts of contrived uses of __init__ that would break when the class is instantiated in the topology builder rather than in the remote deployment environment, but that isn't the crux of my concern.

@dan-blanchard (Member)

@kbourgoin, when you get a chance, your thoughts on what the DSL should look like would be greatly appreciated. You only need to read from this comment down.

@rduplain (Contributor)

I am +1 to Component.spec(**parameters) over Component(**parameters) for topology definitions, adding @dan-blanchard's improvements:

  • Match parameter names to those in the thrift spec.
  • Support referencing a spec by object name when handling inputs.
  • Support class attributes for default parameters which are typically decided when writing the component class (see below).

Reviewing this thread from this comment on, topics are:

  • Object-Oriented Design
  • Domain-Specific Language Design
  • Topology Parameters

Object-Oriented Design

Using Component(**parameters) breaks at least two SOLID principles, most notably the single responsibility principle. It would have the Component both represent the task within the topology and its place within the topology, as @amontalenti points out. While users never instantiate these classes directly, it is useful to understand Storm's entities and how the Component class leads to a task in Storm.

About subclassing Component without calling __init__: the guidance to users is "Don't do that." There should be no problem in using super, and it's a well-known pattern.
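That is, the usual pattern, as a minimal sketch:

class WordSpout(Spout):
    def __init__(self, *args, **kwargs):
        # Always chain up so Component's own initialization still runs.
        super(WordSpout, self).__init__(*args, **kwargs)
        self.words = ['apple', 'banana', 'cherry']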

Domain-Specific Language Design

The differences in the DSL itself between the two approaches:

  • Remove five characters for each component spec. (-1 from me.)
  • Match parameter names to those in the thrift spec. (+1 from me.)
  • Support referencing a spec by object name when handling inputs. (+1 from me.)

Topology Parameters

There are some parameters in the topology spec which are attributes of the component, not of the topology, at least in terms of sensible defaults. When writing a component, you are going to decide on (default) naming conventions for attributes like output_fields and group_on.

  • Support class attributes for default parameters which are typically decided when writing the component class. (+1 from me.)

Attributes like parallelism_hint are truly topology attributes, but since it's called a "hint," I could see how users would want to include that on the component. That said, the topology spec conveys more complete information if it explicitly includes input/output and parallelism.
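A hedged sketch of that last point, with attribute names chosen purely for illustration:

class WordCountBolt(Bolt):
    # Sensible defaults, decided when the component is written:
    output_fields = ['word', 'count']
    group_on = 'word'

class WordCount(Topology):
    word_spout = WordSpout.spec(parallelism=2)
    # The topology spec stays explicit about wiring and parallelism,
    # while inheriting the component-level defaults above:
    word_bolt = WordCountBolt.spec(input=word_spout, parallelism=8)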

@dan-blanchard (Member)

I am +1 to Component.spec(**parameters) over Component(**parameters) for topology definitions.

I just realized this comment thread was never updated with the current state of things. Currently we have:

class WordCount(Topology):
    word_spout = Spec(WordSpout, parallelism=2)
    word_bolt = Spec(WordCountBolt, source=word_spout, group_on=Grouping.fields("word"),
                     parallelism=8)

So, @rduplain, what are your thoughts on Spec vs Component.spec? If we keep Spec, we can leave most of the DSL code out of pystorm (although there should still be a few additions for things like output_fields and streams).

@rduplain (Contributor)

The streamparse Component subclass can add the .spec class method, if your goal is to keep the DSL out of pystorm.

@dan-blanchard (Member)

Good point. I think I initially argued against .spec, but now I'm beginning to think it's the cleaner approach, because it more concretely relates back to the Component class.

@dan-blanchard (Member)

After some offline discussion with @rduplain I think we're approaching something that looks like this (for a really messy topology):

class SuperComplexTopology(Topology):
    multi_spout1 = MultiSpout.spec(name='multi_spout1', 
                                   streams=[Stream(fields=['foo', 'bar']),
                                            Stream(name='direct',
                                                   fields=['dir1', 'dir2'], 
                                                   direct=True)],
                                   parallelism_hint=4)
    simple_spout = SimpleSpout.spec(parallelism_hint=2)
    batching_bolt = SomeBatchingBolt.spec(inputs=[multi_spout1, simple_spout],
                                          streams=[Stream(['sum'])],
                                          json_conf={'topology.tick.tuple.freq.secs': 1})
    directed_bolt = SomeDirectedBolt.spec(inputs={multi_spout1['direct']: Grouping.direct})
    perl_bolt = ShellBolt(execution_command='perl', script='bolt.pl') \
                .spec(inputs={simple_spout: Grouping.fields('junk')},
                      streams=[Stream(fields=['field1', 'field2'])])
    ruby_bolt = ShellBolt(execution_command='ruby', script='bolt.rb') \
                .spec(inputs={simple_spout: Grouping.fields('junk')})
    java_bolt = JavaBolt(full_class_name='com.parsely.yucky.YuckyJavaBolt', 
                         args_list=[45, 'arg2']) \
                .spec(inputs=[perl_bolt])

where Stream, Topology, Grouping, JavaBolt, and ShellBolt would be provided by the streamparse.dsl package, and SimpleSpout is assumed to have the class attribute output_fields defined as:

class SimpleSpout(Spout):
    output_fields = ['junk']

The idea is that people could define output_fields when writing their classes so they don't have to define streams when calling Component.spec.

It was suggested we could also make Stream.__init__ take *fields before its kwargs so people could do Stream('foo', 'bar'), but I'm not completely sold on the idea because Stream('foo') looks like a stream named "foo" to me, rather than a stream called "default" with one output field called "foo".
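To make the ambiguity concrete, a hypothetical sketch of the competing calling conventions:

Stream('foo', 'bar')           # *fields form: the "default" stream with fields 'foo' and 'bar'
Stream(fields=['foo', 'bar'])  # kwargs form: the same stream, but unambiguous
Stream(name='foo')             # a stream actually named "foo", easily confused with Stream('foo')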

@kbourgoin (Member)

Just a few thoughts:

  1. If you're going to have inputs, naming it outputs instead of streams might make things clearer.
  2. If you have output_fields and streams defined, which output does it expect? This may be a case where looking for convenience makes things less clear.

+1 to kwargs for Stream. It definitely looks like a stream named foo.

@emmettbutler (Contributor)

I agree that Stream('foo') looks like a stream named foo, and that using kwargs is probably clearer.

What is Stream(['sum'])? It's not clear what 'sum' means in that context, or why it's in a single-element list.

How do we know all of the keys available in multi_spout1?

@kbourgoin (Member)

With regard to output_fields, it also makes it unpredictable where you need to look when you're writing the bolt that consumes a component: it's entirely possible to look at the component, see output_fields, and end up incorrect because the spec overrode it. I think having the spec as the canonical source of this may help things be more explicit.

@dan-blanchard (Member)

If you're going to have inputs, naming it outputs instead of streams might make things clearer.

@kbourgoin We were trying to keep the names the same as the underlying Thrift ones for consistency, but I guess that's not really necessary. Readability should be the goal in the end.

If you have output_fields and streams defined, which output does it expect? This may be a case where looking for convenience makes things less clear.

Streams defined as an argument to spec would win out. We were thinking output_fields would be for simple default cases.

@dan-blanchard (Member)

What is Stream(['sum'])? It's not clear what 'sum' means in that context, or why it's in a single-element list.

This is a Stream with one field called sum. I just put it in there because people can pass arguments positionally instead of as kwargs, as annoying as that can be.

How do we know all of the keys available in multi_spout1?

The keys are the names of the streams. It seemed a little nicer than adding another classmethod called stream to specify which stream you want to serve as input.

@dfdeshom commented Nov 5, 2015

Should name be required for Stream()? I'm looking at Stream(fields=['field1', 'field2']) and I have no idea where it ends up or if it's consumed by something else.

Looks like inputs can be a list or hash. When it's a list, does that mean the grouping is random?

@dfdeshom commented Nov 5, 2015

+1 to removing output_fields from spouts, and renaming streams to outputs or something similar.

For components that don't need configuration, does SimpleSpout().spec(parallelism_hint=2) also work over SimpleSpout.spec(parallelism_hint=2)?

I see that the java bolt takes another bolt (not a stream) as its input. Are these interchangeable?

Is args_list also available to ShellBolt?

@dan-blanchard (Member)

Should name be required for Stream()? I'm looking at Stream(fields=['field1', 'field2']) and I have no idea where it ends up or if it's consumed by something else.

The default name for streams in Storm is "default", so it would be called "default" in that case.

Looks like inputs can be a list or hash. When it's a list, does that mean the grouping is random?

Yeah, when given a list, it would use the default shuffle grouping, just like the Clojure DSL.
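In other words, a hedged sketch reusing names from the example above (SomeBolt is made up, and the exact spelling of the shuffle constant is assumed from the earlier Grouping.SHUFFLE comment):

# These two forms should be equivalent; a bare list implies shuffle grouping.
bolt_a = SomeBolt.spec(inputs=[simple_spout])
bolt_b = SomeBolt.spec(inputs={simple_spout: Grouping.SHUFFLE})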

@dan-blanchard (Member)

For components that don't need configuration, does SimpleSpout().spec(parallelism_hint=2) also work over SimpleSpout.spec(parallelism_hint=2)?

SimpleSpout() would instantiate the spout, which isn't something we really want people doing, since that might run code that doesn't make sense outside of the topology (like things that connect to a DB, etc.).

I see that the java bolt takes another bolt (not a stream) as its input. Are these interchangeable?

inputs is always either a list of components (i.e., bolts and spouts), or a dict that maps from components to groupings.

Is args_list also available to ShellBolt?

No, because ShellBolt and JavaBolt are just thin wrappers around the underlying Thrift classes that Storm uses for serializing the topology spec. ShellBolt just takes a command and a script (although script can actually be a space-delimited set of arguments), and JavaBolt.args_list is the list of arguments that are given to the constructor in Java.
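So extra arguments for a shell component would presumably be folded into script, as in this hedged sketch (the --batch-size flag is made up):

# No args_list for ShellBolt; arguments are space-delimited inside script.
perl_bolt = ShellBolt(execution_command='perl',
                      script='bolt.pl --batch-size 100') \
            .spec(inputs={simple_spout: Grouping.fields('junk')})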

@dan-blanchard (Member)

+1 to removing output_fields from spouts, and renaming streams to outputs or something similar.

Hmm... maybe my example was a little too complex. @rduplain and I went into this assuming that output_fields would be used by people 90% of the time, and then if people wanted to override that setting for a particular instance of a bolt/spout, they would do that in the topology spec.

@dan-blanchard (Member)

After more deliberation with @kbourgoin offline, we're going to make the following changes from what I last gave:

  1. streams will no longer be an argument to the spec class method. There will instead be a class attribute called outputs (purposefully vague) that can be either:
  • A list of strings, in which case this is assumed to be the field names for the default stream.
  • A list of Stream objects

  I'll add some validation so people can't go crazy and mix and match.

  2. Names for arguments will be less verbose and more Pythonic. This will lose some of the parity with the underlying Thrift classes, but honestly those are such a mess anyway (there's no inheritance, so there are many little classes that all contain instances of each other) that this is not something we should really be subjecting our users to. This means we'll have parallelism instead of parallelism_hint, command instead of execution_command, and conf instead of json_conf (and we'll convert it to JSON on our end automatically).

This leaves us with:

class SuperComplexTopology(Topology):
    multi_spout1 = MultiSpout.spec(name='multi_spout1', parallelism=4)
    simple_spout = SimpleSpout.spec(parallelism=2)
    batching_bolt = SomeBatchingBolt.spec(inputs=[multi_spout1, simple_spout],
                                          conf={'topology.tick.tuple.freq.secs': 1})
    directed_bolt = SomeDirectedBolt.spec(inputs={multi_spout1['direct']: Grouping.direct})
    perl_bolt = ShellBolt(command='perl', script='bolt.pl', 
                          outputs=[Stream(fields=['field1', 'field2'])]) \
                .spec(inputs={simple_spout: Grouping.fields('junk')})
    ruby_bolt = ShellBolt(command='ruby', script='bolt.rb', outputs=['foo']) \
                .spec(inputs={simple_spout: Grouping.fields('junk')})
    java_bolt = JavaBolt(full_class_name='com.parsely.yucky.YuckyJavaBolt', 
                         args_list=[45, 'arg2'], outputs=['coffee']) \
                .spec(inputs=[perl_bolt])

where Stream, Topology, Grouping, JavaBolt, and ShellBolt would be provided by the streamparse.dsl package, and SimpleSpout, MultiSpout, and SomeBatchingBolt define the class attribute outputs like:

class MultiSpout(Spout):
    outputs = [Stream(fields=['foo', 'bar']), 
               Stream(name='direct', fields=['dir1', 'dir2'], direct=True)]
    ...

class SimpleSpout(Spout):
    outputs = ['junk']
    ...

class SomeBatchingBolt(BatchingBolt):
    outputs = [Stream(['sum'])]
    ...

The only point of debate that I think might be left is how to handle JavaBolt and ShellBolt. I kind of like the idea of making spec take the same arguments regardless of which class it's for, for consistency, but I can also see the benefits of getting rid of the JavaBolt and ShellBolt constructors and having a more concise DSL. This would give us:

class SuperComplexTopology(Topology):
    multi_spout1 = MultiSpout.spec(name='multi_spout1', parallelism=4)
    simple_spout = SimpleSpout.spec(parallelism=2)
    batching_bolt = SomeBatchingBolt.spec(inputs=[multi_spout1, simple_spout],
                                          conf={'topology.tick.tuple.freq.secs': 1})
    directed_bolt = SomeDirectedBolt.spec(inputs={multi_spout1['direct']: Grouping.direct})
    perl_bolt = ShellBolt.spec(command='perl', script='bolt.pl', 
                               outputs=[Stream(fields=['field1', 'field2'])],
                               inputs={simple_spout: Grouping.fields('junk')})
    ruby_bolt = ShellBolt.spec(command='ruby', script='bolt.rb', outputs=['foo'],
                               inputs={simple_spout: Grouping.fields('junk')})
    java_bolt = JavaBolt.spec(full_class_name='com.parsely.yucky.YuckyJavaBolt', 
                              args_list=[45, 'arg2'], outputs=['coffee'], 
                              inputs=[perl_bolt])

For now I'm going to proceed with everything assuming we want the first approach, where spec is the same for all of these classes, and ShellBolt and JavaBolt aren't weird for having an outputs argument to spec when none of the other classes can have that. Any comments about it are welcome though.

@dan-blanchard (Member)

I change my mind. I'm going forward assuming we don't want to instantiate ShellBolt and JavaBolt. It's just so much cleaner looking.

@rduplain (Contributor) commented Nov 6, 2015

+1.

@dan-blanchard (Member)

Oh, and this is super minor, but I'm going to say that if someone specifies a direct stream as an input in the list context (i.e., they didn't provide a grouping), we should default to direct instead of shuffle, because that's the only valid grouping in that case.
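A hypothetical sketch of what that default would mean in practice:

# List context with a direct stream: the grouping implicitly becomes direct,
directed_bolt = SomeDirectedBolt.spec(inputs=[multi_spout1['direct']])
# which would be equivalent to spelling it out:
directed_bolt = SomeDirectedBolt.spec(
    inputs={multi_spout1['direct']: Grouping.direct})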

@dfdeshom commented Nov 6, 2015

I change my mind. I'm going forward assuming we don't want to instantiate ShellBolt and JavaBolt. It's just so much cleaner looking.

+1. Does that mean that command will be passed to the spec method for ShellBolt?

@dan-blanchard (Member)

+1. Does that mean that command will be passed to the spec method for ShellBolt?

Affirmative

@dan-blanchard (Member)

Done as of #199 being merged into master.
