
Commit 312c43c

Updated CHANGELOG, README
1 parent 628bb87 commit 312c43c

File tree

3 files changed: +99 -103 lines

CHANGES.md

Lines changed: 37 additions & 52 deletions
@@ -1,113 +1,98 @@
-### 1.1.8
-(Apr 26, 2019)
+## 0.10.2 - Apr 26, 2019
 
 - Refactored CLI commands and logging format
 - Added factory methods to supervisors
 - Fixed bug in rabbitmq backend module
 
-### 1.1.7
-(Apr 26, 2019)
+## 0.10.1 - Apr 26, 2019
 
 - Moved factory logic for client creation to from_url method on client module
 - Added TasqFuture result from clients result, to return more structured results
   with additional informations about execution.
 
-### 1.1.6
-(Apr 22, 2019)
+## 0.10.0 - Apr 22, 2019
 
 - Added a TasqQueue class for more convenient uses
 - Fixed some bugs
-
-(Apr 22, 2019)
-
 - Renamed `master` -> `supervisor`
 - Added RabbitMQ to supported backends, still working on a common interface
 - Refactored some parts on connection
 
-### 1.1.0
-(Mar 23, 2019)
+## 0.9.0 - Mar 23, 2019
 
 - Refactored log system
 - Started backend broker support for job queues and persistence
 - Add redis client
 
-### 1.0.1
-(Jul 15, 2018)
+## 0.8.0 - Jul 15, 2018
 
-- Added repeated jobs capabilities to process/thread queue workers too (Previously only Actor
-  worker could achieve that)
-- Fixed some bugs, renamed `ProcessWorker` -> `QueueWorker` and `ProcessMaster` -> `QueueMaster`
+- Added repeated jobs capabilities to process/thread queue workers too
+  (Previously only Actor worker could achieve that)
+- Fixed some bugs, renamed `ProcessWorker` -> `QueueWorker` and
+  `ProcessMaster` -> `QueueMaster`
 
-### 1.0.0
-(Jul 14, 2018)
+## 0.7.0 - Jul 14, 2018
 
-- Added the possibility to choose the type of workers of each master process, can be either a pool
-  of actors or a pool of processes, based on the nature of the majority of the jobs that need to be
-  executed. A majority of I/O bound operations should stick to `ActorMaster` type workers, in case
-  of CPU bound tasks `QueueMaster` should give better results.
+- Added the possibility to choose the type of workers of each master process,
+  can be either a pool of actors or a pool of processes, based on the nature of
+  the majority of the jobs that need to be executed. A majority of I/O bound
+  operations should stick to `ActorMaster` type workers, in case of CPU bound
+  tasks `QueueMaster` should give better results.
 
-### 0.9.0
-(May 18, 2018)
+## 0.6.1 - May 18, 2018
 
-- Decoupled connection handling from tasq.remote.master and tasq.remote.client into a dedicated
-  module tasq.remote.connection
+- Decoupled connection handling from `tasq.remote.master` and `tasq.remote.client`
+  into a dedicated module tasq.remote.connection
 
-### 0.8.0
-(May 17, 2018)
+## 0.6.0 - May 17, 2018
 
-- Simple implementation of digital signed data sent through sockets, this way sender and receiver
-  have a basic security layer to check for integrity and legitimacy of received data
+- Simple implementation of digital signed data sent through sockets, this way
+  sender and receiver have a basic security layer to check for integrity and
+  legitimacy of received data
 
-### 0.7.0
-(May 14, 2018)
+## 0.5.0 - May 14, 2018
 
-- Added a ClientPool implementation to schedule jobs to different workers by using routers
-  capabilities
+- Added a ClientPool implementation to schedule jobs to different workers by
+  using routers capabilities
 
-### 0.6.0
-(May 6, 2018)
+## 0.4.0 - May 6, 2018
 
-- Refactored client code, now it uses a Future system to handle results and return a future even
-  while scheduling a job in a non-blocking manner
+- Refactored client code, now it uses a Future system to handle results and
+  return a future even while scheduling a job in a non-blocking manner
 - Improved logging
 - Improved representation of a Job in string
 
-### 0.5.0
-(May 5, 2018)
+## 0.3.0 - May 5, 2018
 
 - Added first implementation of delayed jobs
 - Added first implementation of interval-scheduled jobs
 - Added a basic ActorSystem like and context to actors
-- Refactored some parts, removed Singleton and Configuration classes from __init__.py
 
-### 0.3.0:
-(May 1, 2018)
+- Refactored some parts, removed Singleton and Configuration classes from
+  __init__.py
+
+## 0.2.1 - May 1, 2018
 
 - Fixed minor bug in initialization of multiple workers on the same node
 - Added support for pending tasks on the client side
 
-### 0.2.0:
-(Apr 30, 2018)
+## 0.2.0 - Apr 30, 2018
 
 - Renamed some modules
 - Added basic logging to modules
 - Defined a client supporting sync and async way of scheduling jobs
 - Added routing logic for worker actors
 - Refactored code
 
-### 0.1.2
-(Apr 29, 2018)
+## 0.1.2 - Apr 29, 2018
 
 - Added asynchronous way of handling communication on ZMQ sockets
 
-### 0.1.1:
-(Apr 28, 2018)
+## 0.1.1 - Apr 28, 2018
 
 - Switch to PUSH/PULL pattern offered by ZMQ
 - Subclassed ZMQ sockets in order to handle cloudpickle serialization
 
-### 0.1.0:
-
-(Apr 26, 2018)
+## 0.1.0 - Apr 26, 2018
 
 - First unfinished version, WIP
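
For illustration only: the 0.10.1 entry above mentions moving client creation into a `from_url` factory on the client module. A minimal sketch of what scheme-based dispatch of that kind can look like follows; the stub classes and the accepted URL schemes are assumptions for the example, not Tasq's actual API.

```
# Hypothetical sketch of a from_url-style factory; the stub classes and the
# accepted URL schemes are placeholders, not Tasq's real client classes.
from dataclasses import dataclass
from urllib.parse import urlparse


@dataclass
class ZMQClientStub:
    host: str
    port: int


@dataclass
class RedisClientStub:
    host: str
    port: int
    db: int = 0


def from_url(url: str):
    """Build a client from a URL such as 'zmq://localhost:9000' or 'redis://localhost:6379/0'."""
    u = urlparse(url)
    if u.scheme in ("zmq", "tcp"):
        return ZMQClientStub(u.hostname, u.port)
    if u.scheme == "redis":
        return RedisClientStub(u.hostname, u.port or 6379, int(u.path.lstrip("/") or 0))
    raise ValueError(f"unsupported backend scheme: {u.scheme!r}")


print(from_url("redis://localhost:6379/1"))  # RedisClientStub(host='localhost', port=6379, db=1)
```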

README.md

Lines changed: 56 additions & 47 deletions
@@ -1,22 +1,25 @@
 Tasq
 ====
 
-Very simple distributed Task queue that allow the scheduling of job functions to be
-executed on local or remote workers. Can be seen as a Proof of Concept leveraging ZMQ sockets and
-cloudpickle serialization capabilities as well as a very basic actor system to handle different
-loads of work from connecting clients.
+Very simple distributed Task queue that allow the scheduling of job functions
+to be executed on local or remote workers. Can be seen as a Proof of Concept
+leveraging ZMQ sockets and cloudpickle serialization capabilities as well as a
+very basic actor system to handle different loads of work from connecting
+clients. Originally it was meant to be just a brokerless job queue, recently
+I dove deeper on the topic and decided to add support for job persistence and
+extensions for Redis/RabbitMQ middlewares as well.
 
-Currently Tasq supports a brokerless approach through ZMQ sockets or Redis/RabbitMQ as backends.
+The main advantage of using a brokerless task queue, beside latencies is the
+lower level of complexity of the system. Additionally Tasq offer the
+possibility of launching and forget some workers on a network and schedule jobs
+to them without having them to know nothing about the code that they will run,
+allowing to define tasks dinamically, without stopping the workers. Obviously
+this approach open up more risks of malicious code to be injected to the
+workers, currently the only security measure is to sign serialized data passed
+to workers, but the entire system is meant to be used in a safe environment.
 
-The main advantage of using a brokerless task queue, beside latencies is the possibility of launch
-and forget some workers on a network and schedule jobs to them without having them to know nothing
-about the code that they will run, allowing to define tasks dinamically, without stopping the
-workers. Obviously this approach open up more risks of malicious code to be injected to the workers,
-currently the only security measure is to sign serialized data passed to workers, but the entire
-system is meant to be used in a safe environment.
-
-**NOTE:** The project is still in development stage and it's not advisable to try it in
-production enviroments.
+**NOTE:** The project is still in development stage and it's not advisable to
+try it in production enviroments.
 
 
 
@@ -34,7 +37,7 @@ In a python shell
 **Using a queue object**
 
 ```
-Python 3.7.3 (default, Mar 26 2019, 21:43:19)
+Python 3.7.3 (default, Apr 26 2019, 21:43:19)
 Type 'copyright', 'credits' or 'license' for more information
 IPython 7.4.0 -- An enhanced Interactive Python. Type '?' for help.
 Warning: disable autoreload in ipython_config.py to improve performance.
@@ -92,17 +95,17 @@ Scheduling a task to be executed continously in a defined interval
 In [15] tq.put(fib, 5, name='8_seconds_interval_fib', eta='8s')
 
 In [16] tq.put(fib, 5, name='2_hours_interval_fib', eta='2h')
-
 ```
+
 Delayed and interval tasks are supported even in blocking scheduling manner.
 
-Tasq also supports an optional static configuration file, in the `tasq.settings.py` module is
-defined a configuration class with some default fields. By setting the environment variable
-`TASQ_CONF` it is possible to configure the location of the json configuration file on the
-filesystem.
+Tasq also supports an optional static configuration file, in the
+`tasq.settings.py` module is defined a configuration class with some default
+fields. By setting the environment variable `TASQ_CONF` it is possible to
+configure the location of the json configuration file on the filesystem.
 
-By setting the `-f` flag it is possible to also set a location of a configuration to follow on the
-filesystem
+By setting the `-c` flag it is possible to also set a location of a
+configuration to follow on the filesystem
 
 ```
 $ tq worker -c path/to/conf/conf.json
@@ -113,47 +116,53 @@ A worker can be started by specifying the type of sub worker we want:
 ```
 $ tq rabbitmq-worker --worker-type process
 ```
-Using `process` type subworker it is possible to use a distributed queue for parallel execution,
-usefull when the majority of the jobs are CPU bound instead of I/O bound (actors are preferable in
-that case).
+Using `process` type subworker it is possible to use a distributed queue for
+parallel execution, usefull when the majority of the jobs are CPU bound instead
+of I/O bound (actors are preferable in that case).
 
-If jobs are scheduled for execution on a disconnected client, or remote workers are not up at the
-time of the scheduling, all jobs will be enqeued for later execution. This means that there's no
-need to actually start workers before job scheduling, at the first worker up all jobs will be sent
-and executed.
+If jobs are scheduled for execution on a disconnected client, or remote workers
+are not up at the time of the scheduling, all jobs will be enqeued for later
+execution. This means that there's no need to actually start workers before job
+scheduling, at the first worker up all jobs will be sent and executed.
 
 ### Security
 
-Currently tasq gives the option to send pickled functions using digital sign in order to prevent
-manipulations of the sent payloads, being dependency-free it uses `hmac` and `hashlib` to generate
-digests and to verify integrity of payloads, planning to move to a better implementation probably
-using `pynacl` or something similar.
+Currently tasq gives the option to send pickled functions using digital sign in
+order to prevent manipulations of the sent payloads, being dependency-free it
+uses `hmac` and `hashlib` to generate digests and to verify integrity of
+payloads, planning to move to a better implementation probably using `pynacl`
+or something similar.
 
 ## Behind the scenes
 
-Essentially it is possible to start workers across the nodes of a network without forming a cluster
-and every single node can host multiple workers by setting differents ports for the communication.
-Each worker, once started, support multiple connections from clients and is ready to accept tasks.
+Essentially it is possible to start workers across the nodes of a network
+without forming a cluster and every single node can host multiple workers by
+setting differents ports for the communication. Each worker, once started,
+support multiple connections from clients and is ready to accept tasks.
 
-Once a worker receive a job from a client, it demand its execution to dedicated actor or process,
-usually selected from a pool according to a defined routing strategy in the case of actor (e.g.
-Round robin, Random routing or Smallest mailbox which should give a trivial indication of the
-workload of each actor and select the one with minimum pending tasks to execute) or using a simple
+Once a worker receive a job from a client, it demand its execution to dedicated
+actor or process, usually selected from a pool according to a defined routing
+strategy in the case of actor (e.g. Round robin, Random routing or Smallest
+mailbox which should give a trivial indication of the workload of each actor
+and select the one with minimum pending tasks to execute) or using a simple
 distributed queue across a pool of process in producer-consumer way.
 
 ![Tasq master-workers arch](static/worker_model_2.png)
 
-Another (pool of) actor(s) is dedicated to answering the clients with the result once it is ready,
-this way it is possible to make the worker listening part unblocking and as fast as possible.
+Another (pool of) actor(s) is dedicated to answering the clients with the
+result once it is ready, this way it is possible to make the worker listening
+part unblocking and as fast as possible.
 
-The reception of jobs from clients is handled by `ZMQ.PULL` socket while the response transmission
-handled by `ResponseActor` is served by `ZMQ.PUSH` socket, effectively forming a dual channel of
-communication, separating ingoing from outgoing traffic.
+The reception of jobs from clients is handled by `ZMQ.PULL` socket while the
+response transmission handled by `ResponseActor` is served by `ZMQ.PUSH`
+socket, effectively forming a dual channel of communication, separating ingoing
+from outgoing traffic.
 
 ## Installation
 
-Being a didactical project it is not released on Pypi yet, just clone the repository and install it
-locally or play with it using `python -i` or `ipython`.
+Being a didactical project it is not released on Pypi yet, just clone the
+repository and install it locally or play with it using `python -i` or
+`ipython`.
 
 ```
 $ git clone https://github.com/codepr/tasq.git
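
For context on the Security paragraph in the diff above, which describes generating `hmac`/`hashlib` digests over pickled payloads: a minimal, standard-library-only sketch of that sign-and-verify idea is shown below. The digest-prefix framing, key handling and function names are assumptions for the example, not Tasq's actual wire format.

```
# Illustrative sign/verify helpers using only the standard library.
# The digest-prefix framing and key handling are assumptions for the example,
# not Tasq's actual on-the-wire format.
import hashlib
import hmac
import pickle

SIGNKEY = b"shared-secret"          # both client and worker must know this
DIGEST_SIZE = hashlib.sha256().digest_size


def sign(payload: bytes, key: bytes = SIGNKEY) -> bytes:
    """Prepend an HMAC-SHA256 digest to the serialized payload."""
    return hmac.new(key, payload, hashlib.sha256).digest() + payload


def verify(frame: bytes, key: bytes = SIGNKEY) -> bytes:
    """Check the digest and return the payload, or raise if it was tampered with."""
    digest, payload = frame[:DIGEST_SIZE], frame[DIGEST_SIZE:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(digest, expected):
        raise ValueError("signature mismatch: payload rejected")
    return payload


job = pickle.dumps({"fn": "fib", "args": (5,)})
assert verify(sign(job)) == job
```

`hmac.compare_digest` is used instead of `==` to avoid leaking information through comparison timing.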

tasq/remote/client.py

Lines changed: 6 additions & 4 deletions
@@ -55,9 +55,10 @@ class BaseTasqClient(metaclass=ABCMeta):
    :type port: int
    :param port: The port associated with the host param
 
-    :type signkey: bool or False
-    :param signkey: Boolean flag, sign bytes passing around through sockets
-                    if True
+    :type signkey: str or None
+    :param signkey: String representing a sign, marks bytes passing around
+                    through sockets
+
 
    """
 
@@ -127,7 +128,8 @@ def connect(self):
        self._client.connect()
        self._is_connected = True
        # Start gathering thread
-       self._gatherer.start()
+       if not self._gatherer.is_alive():
+           self._gatherer.start()
        # Check if there are pending requests and in case, empty the queue
        while self._pending:
            job = self._pending.pop()
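
The second hunk guards `self._gatherer.start()` with `is_alive()`: a `threading.Thread` object can be started at most once, so calling `connect()` again while the gathering thread is already running would otherwise raise `RuntimeError`. A minimal standalone illustration of the pattern, with names that are illustrative rather than the client's actual attributes:

```
# Minimal illustration of the is_alive() guard: a Thread may only be started
# once, so a repeated connect()-style call has to check before calling start().
import threading
import time


def gather_results():
    # Stand-in for the client's long-running result-gathering loop.
    time.sleep(1)


gatherer = threading.Thread(target=gather_results, daemon=True)


def connect():
    # Without this check, a second call would raise
    # "RuntimeError: threads can only be started once".
    if not gatherer.is_alive():
        gatherer.start()


connect()
connect()  # safe: the gatherer is still running, so start() is skipped
```

Note the guard only helps while the thread is still running; it assumes the gatherer lives for the lifetime of the client, which is the usual case here.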
