
Commit eb426cb

Added some documentation fixes
1 parent 0efd0c7 commit eb426cb

7 files changed: +158 -39 lines changed


LICENSE (+5 -5)

@@ -1,4 +1,5 @@
-Apache License
+
+Apache License
 Version 2.0, January 2004
 http://www.apache.org/licenses/
 
@@ -178,15 +179,15 @@ Apache License
    APPENDIX: How to apply the Apache License to your work.
 
       To apply the Apache License to your work, attach the following
-      boilerplate notice, with the fields enclosed by brackets "{}"
+      boilerplate notice, with the fields enclosed by brackets "[]"
       replaced with your own identifying information. (Don't include
       the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.
 
-   Copyright {yyyy} {name of copyright owner}
+   Copyright 2016 Taras Voinarovskyi
 
    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
@@ -198,5 +199,4 @@ Apache License
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
-   limitations under the License.
-
+   limitations under the License.

aiokafka/consumer.py (+47 -6)

@@ -109,6 +109,12 @@ class AIOKafkaConsumer(object):
         AIOKafkaConsumer supports Kafka API versions >=0.9 only.
         If set to 'auto', will attempt to infer the broker version by
         probing various APIs. Default: auto
+
+
+    Note:
+        Many configuration parameters are taken from Java Client:
+        https://kafka.apache.org/documentation.html#newconsumerconfigs
+
     """
     def __init__(self, *topics, loop,
                  bootstrap_servers='localhost',
@@ -421,8 +427,9 @@ def seek(self, partition, offset):
     @asyncio.coroutine
     def seek_to_committed(self, *partitions):
         """Seek to the committed offset for partitions
+
         Arguments:
-            *partitions: optionally provide specific TopicPartitions,
+            partitions: optionally provide specific TopicPartitions,
                 otherwise default to all assigned partitions
 
         Raises:
@@ -545,16 +552,36 @@ def _on_change_subscription(self):
     def getone(self, *partitions):
         """
         Get one message from Kafka
-        If no new messages occured, this method will wait it
+        If no new messages prefetched, this method will wait for it
 
         Arguments:
-            partitions (List[TopicPartition]): The partitions that need
-                fetching message. If no one partition specified then all
-                subscribed partitions will be used
+            partitions (List[TopicPartition]): Optional list of partitions to
+                return from. If no partitions specified then returned message
+                will be from any partition, which consumer is subscribed to.
 
         Returns:
-            instance of collections.namedtuple("ConsumerRecord",
+            ConsumerRecord
+
+        Will return instance of
+
+        .. code:: python
+
+            collections.namedtuple(
+                "ConsumerRecord",
                 ["topic", "partition", "offset", "key", "value"])
+
+        Example usage:
+
+
+        .. code:: python
+
+            while True:
+                message = yield from consumer.getone()
+                topic = message.topic
+                partition = message.partition
+                # Process message
+                print(message.offset, message.key, message.value)
+
         """
         assert all(map(lambda k: isinstance(k, TopicPartition), partitions))
 
@@ -580,6 +607,20 @@ def getmany(self, *partitions, timeout_ms=0):
         Returns:
             dict: topic to list of records since the last fetch for the
                 subscribed list of topics and partitions
+
+        Example usage:
+
+
+        .. code:: python
+
+            data = yield from consumer.getmany()
+            for tp, messages in data.items():
+                topic = tp.topic
+                partition = tp.partition
+                for message in messages:
+                    # Process message
+                    print(message.offset, message.key, message.value)
+
         """
         assert all(map(lambda k: isinstance(k, TopicPartition), partitions))

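The return shapes described in the new docstrings can be exercised with nothing but the standard library. The sketch below defines ConsumerRecord locally exactly as the getone() docstring spells it out; the TopicPartition namedtuple here is a local stand-in for the library's own type, and the record contents are fabricated for illustration.

```python
import collections

# The record type the getone() docstring describes: a plain namedtuple,
# so fields are available both by name and by position.
ConsumerRecord = collections.namedtuple(
    "ConsumerRecord",
    ["topic", "partition", "offset", "key", "value"])

# Stand-in for the library's TopicPartition type (illustration only).
TopicPartition = collections.namedtuple("TopicPartition", ["topic", "partition"])

# A fabricated record in place of what `yield from consumer.getone()`
# would return.
message = ConsumerRecord(
    topic="topic1", partition=0, offset=42, key=b"k", value=b"payload")
print(message.offset, message.key, message.value)

# getmany() instead returns a dict mapping TopicPartition to a list of
# records; iterating it follows the docstring example's shape.
data = {TopicPartition("topic1", 0): [message]}
for tp, messages in data.items():
    for m in messages:
        print(tp.topic, tp.partition, m.offset, m.value)
```

Because ConsumerRecord is an ordinary namedtuple, records also unpack positionally, which the docstring's field order makes predictable.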
aiokafka/producer.py (+8 -6)

@@ -22,8 +22,8 @@ class AIOKafkaProducer(object):
     """A Kafka client that publishes records to the Kafka cluster.
 
     The producer consists of a pool of buffer space that holds records that
-    haven't yet been transmitted to the server as well as a background I/O
-    thread that is responsible for turning these records into requests and
+    haven't yet been transmitted to the server as well as a background task
+    that is responsible for turning these records into requests and
     transmitting them to the cluster.
 
     The send() method is asynchronous. When called it adds the record to a
@@ -58,7 +58,8 @@ class AIOKafkaProducer(object):
             the leader to have received before considering a request complete.
             This controls the durability of records that are sent. The
             following settings are common:
-            0: Producer will not wait for any acknowledgment from the server
+
+            0: Producer will not wait for any acknowledgment from the server
                 at all. The message will immediately be added to the socket
                 buffer and considered sent. No guarantee can be made that the
                 server has received the record in this case, and the retries
@@ -74,6 +75,7 @@ class AIOKafkaProducer(object):
                 replicas to acknowledge the record. This guarantees that the
                 record will not be lost as long as at least one in-sync replica
                 remains alive. This is the strongest available guarantee.
+
             If unset, defaults to acks=1.
         compression_type (str): The compression type for all data generated by
             the producer. Valid values are 'gzip', 'snappy', 'lz4', or None.
@@ -110,8 +112,8 @@ class AIOKafkaProducer(object):
             probing various APIs. Default: auto
 
     Note:
-        Configuration parameters are described in more detail at
-        https://kafka.apache.org/090/configuration.html#producerconfigs
+        Many configuration parameters are taken from Java Client:
+        https://kafka.apache.org/documentation.html#producerconfigs
     """
     _PRODUCER_CLIENT_ID_SEQUENCE = 0
 
@@ -175,7 +177,7 @@ def start(self):
 
     @asyncio.coroutine
     def stop(self):
-        """Close all connections to kafka cluser"""
+        """Flush all pending data and close all connections to Kafka cluster"""
         if self._closed:
             return

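The docstring change above replaces "background I/O thread" with "background task", which matches the asyncio model: send() only buffers a record, and a coroutine running in the same event loop drains the buffer. The following is a toy model of that design built on asyncio.Queue, not aiokafka's actual internals; the ToyProducer class and its method names are invented for illustration.

```python
import asyncio


class ToyProducer:
    """Toy model of the buffer + background-task design the
    AIOKafkaProducer docstring describes (not the real implementation)."""

    def __init__(self):
        self._queue = asyncio.Queue()  # the "pool of buffer space"
        self.sent = []                 # records "transmitted" so far
        self._task = None

    async def start(self):
        # Background task responsible for turning buffered records
        # into requests.
        self._task = asyncio.ensure_future(self._sender())

    def send(self, record):
        # Asynchronous from the caller's point of view: just buffer it.
        self._queue.put_nowait(record)

    async def _sender(self):
        while True:
            record = await self._queue.get()
            self.sent.append(record)   # pretend to transmit
            self._queue.task_done()

    async def stop(self):
        # Mirrors the clarified stop() docstring: flush pending data
        # before closing down.
        await self._queue.join()
        self._task.cancel()
        try:
            await self._task
        except asyncio.CancelledError:
            pass


async def main():
    producer = ToyProducer()
    await producer.start()
    for i in range(3):
        producer.send("record-%d" % i)
    await producer.stop()
    return producer.sent


sent = asyncio.run(main())
print(sent)
```

Because stop() joins the queue before cancelling the sender, every buffered record is drained first, which is the property the reworded stop() docstring promises.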
docs/conf.py (+2 -1)

@@ -57,7 +57,8 @@ def get_version(release):
     'sphinx.ext.autodoc',
     'sphinx.ext.intersphinx',
     'sphinx.ext.viewcode',
-    'sphinxcontrib.asyncio'
+    'sphinxcontrib.asyncio',
+    'sphinx.ext.napoleon'
 ]
 
 intersphinx_mapping = {'python': ('http://docs.python.org/3', None)}

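The sphinx.ext.napoleon extension enabled above parses Google-style docstring sections such as the "Arguments:", "Returns:" and "Raises:" blocks used in the docstrings this commit touches. A minimal illustration (the function and its contents are hypothetical, written only to show the docstring shape napoleon consumes):

```python
def seek_example(partition, offset):
    """Seek to the given offset (hypothetical example function).

    Arguments:
        partition: a TopicPartition to seek on.
        offset (int): absolute offset to move to.

    Returns:
        None
    """
    return None


# The docstring is ordinary Python; napoleon only affects how Sphinx
# renders it, so the sections are visible in __doc__ as plain text.
print("Arguments:" in seek_example.__doc__)
```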
docs/group_consumer.rst (+1 -1)

@@ -1,6 +1,6 @@
 
 Example. Group consumer
-======================================
+=======================
 
 Producer:
 
docs/index.rst (+93 -20)

@@ -1,27 +1,20 @@
 Welcome to aiokafka's documentation!
 ====================================
 
+.. _GitHub: https://github.com/aio-libs/aiokafka
+.. _kafka-python: https://github.com/dpkp/kafka-python
 .. _asyncio: http://docs.python.org/3.4/library/asyncio.html
 
 **aiokafka** is a client for the Apache Kafka distributed stream processing system using the asyncio_.
+It is based on kafka-python_ library and reuses its internals for protocol parsing, errors, etc.
 Client is designed to function much like the official java client, with a sprinkling of pythonic interfaces.
 
 **aiokafka** is used with 0.9 Kafka brokers and supports fully coordinated consumer groups -- i.e., dynamic
 partition assignment to multiple consumers in the same group.
 
 
-Installation
-============
-
-.. code::
-
-    pip3 install aiokafka
-
-.. note:: *aiokafka* requires *python-kafka* library.
-
-
 Getting started
-===============
+---------------
 
 
 AIOKafkaConsumer
@@ -50,7 +43,8 @@ See consumer example:
         print("error while consuming message: ", err)
 
     loop = asyncio.get_event_loop()
-    consumer = AIOKafkaConsumer('topic1', 'topic2', loop=loop, bootstrap_servers='localhost:1234')
+    consumer = AIOKafkaConsumer(
+        'topic1', 'topic2', loop=loop, bootstrap_servers='localhost:1234')
     loop.run_until_complete(consumer.start())
     c_task = asyncio.async(consume_task(consumer))
     try:
@@ -88,16 +82,95 @@ See producer example:
     loop.close()
 
 
-Compression
-===========
+Installation
+------------
+
+.. code::
+
+    pip3 install aiokafka
+
+.. note:: *aiokafka* requires *python-kafka* library and heavily depends on it.
+
+
+Optional LZ4 install
+++++++++++++++++++++
+
+To enable LZ4 compression/decompression, install lz4tools and xxhash:
+
+>>> pip3 install lz4tools
+>>> pip3 install xxhash
+
+
+Optional Snappy install
++++++++++++++++++++++++
+
+1. Download and build Snappy from http://code.google.com/p/snappy/downloads/list
+
+   Ubuntu:
+
+   .. code:: bash
+
+       apt-get install libsnappy-dev
 
-aiokafka supports gzip compression/decompression natively. To produce or
-consume lz4 compressed messages, you must install lz4tools and xxhash.
-To enable snappy, install python-snappy (also requires snappy library).
+   OSX:
+
+   .. code:: bash
+
+       brew install snappy
+
+   From Source:
+
+   .. code:: bash
+
+       wget http://snappy.googlecode.com/files/snappy-1.0.5.tar.gz
+       tar xzvf snappy-1.0.5.tar.gz
+       cd snappy-1.0.5
+       ./configure
+       make
+       sudo make install
+
+
+2. Install the `python-snappy` module
+
+   .. code:: bash
+
+       pip3 install python-snappy
+
+
+
+Source code
+-----------
+
+The project is hosted on GitHub_
+
+Please feel free to file an issue on `bug tracker
+<https://github.com/aio-libs/aiokafka/issues>`_ if you have found a bug
+or have some suggestion for library improvement.
+
+The library uses `Travis <https://travis-ci.org/aio-libs/aiokafka>`_ for
+Continuous Integration.
+
+
+Authors and License
+-------------------
+
+The ``aiokafka`` package is Apache 2 licensed and freely available.
+
+Feel free to improve this package and send a pull request to GitHub_.
+
+
+Contents:
 
 .. toctree::
-   :hidden:
    :maxdepth: 2
 
-   API documentation <api>
-   Examples <examples>
+   api
+   examples
+
+
+Indices and tables
+==================
+
+* :ref:`genindex`
+* :ref:`modindex`
+* :ref:`search`
docs/manual_commit.rst (+2 -0)

@@ -2,6 +2,8 @@
 Example. Manual commit
 ======================
 
+When processing data from several
+
 
 Consumer:
 
