Fix Doxygen warnings
edenhill committed Sep 4, 2019
1 parent bd71c96 commit 58c4b9b
Showing 4 changed files with 154 additions and 58 deletions.
8 changes: 5 additions & 3 deletions Doxyfile
@@ -230,6 +230,8 @@ TAB_SIZE = 4

ALIASES = "locality=@par Thread restriction:"
ALIASES += "locks=@par Lock restriction:"
+# Automatically escape @REALM in CONFIGURATION.md
+ALIASES += "REALM=\@REALM"

# This tag can be used to specify a number of word-keyword mappings (TCL only).
# A mapping has the form "name=value". For example adding "class=itcl::class"
@@ -699,7 +701,7 @@ CITE_BIB_FILES =
# messages are off.
# The default value is: NO.

-QUIET = NO
+QUIET = YES

# The WARNINGS tag can be used to turn on/off the warning messages that are
# generated to standard error (stderr) by doxygen. If WARNINGS is set to YES
@@ -992,15 +994,15 @@ VERBATIM_HEADERS = YES
# compiled with the --with-libclang option.
# The default value is: NO.

-CLANG_ASSISTED_PARSING = NO
+#CLANG_ASSISTED_PARSING = NO

# If clang assisted parsing is enabled you can provide the compiler with command
# line options that you would normally use when invoking the compiler. Note that
# the include paths will already be set by doxygen for the files and directories
# specified with INPUT and INCLUDE_PATH.
# This tag requires that the tag CLANG_ASSISTED_PARSING is set to YES.

-CLANG_OPTIONS =
+#CLANG_OPTIONS =

#---------------------------------------------------------------------------
# Configuration options related to the alphabetical class index
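The new `REALM` alias above exists so that a literal `@REALM` token in the generated CONFIGURATION.md documentation is not parsed as an (unknown) Doxygen command, which is one source of the warnings this commit silences. A sketch of the effect, with a hypothetical principal string for illustration:

```
# Without the alias, documentation text like this Kerberos principal
# would make Doxygen treat "@REALM" as a command and emit a warning:
#
#   primary/instance@REALM
#
# With ALIASES += "REALM=\@REALM", the @REALM token expands to the
# escaped form \@REALM, which renders as the literal text "@REALM".
```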
10 changes: 5 additions & 5 deletions INTRODUCTION.md
@@ -232,7 +232,7 @@ configuration (`request.required.acks` and `message.send.max.retries`, etc).

If the topic configuration property `request.required.acks` is set to wait
for message commit acknowledgements from brokers (any value but 0, see
-[`CONFIGURATION.md`](https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md)
+[`CONFIGURATION.md`](CONFIGURATION.md)
for specifics) then librdkafka will hold on to the message until
all expected acks have been received, gracefully handling the following events:

@@ -404,7 +404,7 @@ and exactly-once producer guarantees.
The idempotent producer is enabled by setting the `enable.idempotence`
configuration property to `true`, this will automatically adjust a number of
other configuration properties to adhere to the idempotency requirements,
-see the documentation of `enable.idempotence` in [CONFIGURATION.md] for
+see the documentation of `enable.idempotence` in [CONFIGURATION.md](CONFIGURATION.md) for
more information.
Producer instantiation will fail if the user supplied an incompatible value
for any of the automatically adjusted properties, e.g., it is an error to
@@ -698,9 +698,9 @@ This method should be called by the application on delivery report error.
### Documentation

The librdkafka API is documented in the
-[`rdkafka.h`](https://github.com/edenhill/librdkafka/blob/master/src/rdkafka.h)
+[`rdkafka.h`](src/rdkafka.h)
header file, the configuration properties are documented in
-[`CONFIGURATION.md`](https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md)
+[`CONFIGURATION.md`](CONFIGURATION.md)

### Initialization

@@ -717,7 +717,7 @@ It is created by calling `rd_kafka_topic_new()`.
Both `rd_kafka_t` and `rd_kafka_topic_t` comes with a configuration API which
is optional.
Not using the API will cause librdkafka to use its default values which are
-documented in [`CONFIGURATION.md`](https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md).
+documented in [`CONFIGURATION.md`](CONFIGURATION.md).

**Note**: An application may create multiple `rd_kafka_t` objects and
they share no state.
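The idempotent-producer passage in this file describes how `enable.idempotence` automatically adjusts related properties and rejects incompatible user-supplied values. A minimal configuration sketch (property names from CONFIGURATION.md; the adjusted values shown are illustrative of the behavior, not an exhaustive list):

```
# Enabling idempotence is a single switch:
enable.idempotence=true

# librdkafka then enforces compatible values for the related
# properties, e.g. acks must be all and retries must be non-zero.
# Explicitly supplying an incompatible value, such as acks=1
# together with enable.idempotence=true, makes producer
# instantiation fail instead of silently losing the guarantee.
```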
37 changes: 25 additions & 12 deletions src-cpp/rdkafkacpp.h
@@ -271,7 +271,7 @@ enum ErrorCode {
ERR__PURGE_QUEUE = -152,
/** Purged in flight */
ERR__PURGE_INFLIGHT = -151,
-/** Fatal error: see ::fatal_error() */
+/** Fatal error: see RdKafka::Handle::fatal_error() */
ERR__FATAL = -150,
/** Inconsistent state */
ERR__INCONSISTENT = -149,
@@ -887,7 +887,7 @@ class RD_EXPORT SslCertificateVerifyCb {
* The application may set the SSL context error code by returning 0
* from the verify callback and providing a non-zero SSL context error code
* in \p x509_error.
- * If the verify callback sets \x509_error to 0, returns 1, and the
+ * If the verify callback sets \p x509_error to 0, returns 1, and the
* original \p x509_error was non-zero, the error on the SSL context will
* be cleared.
* \p x509_error is always a valid pointer to an int.
@@ -1429,11 +1429,11 @@ class RD_EXPORT Handle {
virtual ErrorCode set_log_queue (Queue *queue) = 0;

/**
- * @brief Cancels the current callback dispatcher (Producer::poll(),
- *        Consumer::poll(), KafkaConsumer::consume(), etc).
+ * @brief Cancels the current callback dispatcher (Handle::poll(),
+ *        KafkaConsumer::consume(), etc).
*
* A callback may use this to force an immediate return to the calling
- * code (caller of e.g. ..::poll()) without processing any further
+ * code (caller of e.g. Handle::poll()) without processing any further
* events.
*
* @remark This function MUST ONLY be called from within a
@@ -1603,12 +1603,18 @@
class RD_EXPORT TopicPartition {
public:
/**
- * Create topic+partition object for \p topic and \p partition
- * and optionally \p offset.
+ * @brief Create topic+partition object for \p topic and \p partition.
*
* Use \c delete to deconstruct.
*/
static TopicPartition *create (const std::string &topic, int partition);

+ /**
+  * @brief Create topic+partition object for \p topic and \p partition
+  *        with offset \p offset.
+  *
+  * Use \c delete to deconstruct.
+  */
static TopicPartition *create (const std::string &topic, int partition,
int64_t offset);

@@ -1739,6 +1745,7 @@ class RD_EXPORT Topic {

class RD_EXPORT MessageTimestamp {
public:
+ /*! Message timestamp type */
enum MessageTimestampType {
MSG_TIMESTAMP_NOT_AVAILABLE, /**< Timestamp not available */
MSG_TIMESTAMP_CREATE_TIME, /**< Message creation time (source) */
@@ -1815,13 +1822,18 @@ class RD_EXPORT Headers {
/**
* @brief Copy constructor
*
- * @param other other Header used for the copy constructor
+ * @param other Header to make a copy of.
*/
Header(const Header &other):
key_(other.key_), err_(other.err_), value_size_(other.value_size_) {
value_ = copy_value(other.value_, value_size_);
}

+ /**
+  * @brief Assignment operator
+  *
+  * @param other Header to make a copy of.
+  */
Header& operator=(const Header &other)
{
if (&other == this) {
@@ -1900,8 +1912,8 @@ class RD_EXPORT Headers {
/**
* @brief Create a new instance of the Headers object from a std::vector
*
- * @params headers std::vector of RdKafka::Headers::Header objects.
- *                 The headers are copied, not referenced.
+ * @param headers std::vector of RdKafka::Headers::Header objects.
+ *                The headers are copied, not referenced.
*
* @returns a Headers list from std::vector set to the size of the std::vector
*/
@@ -2842,7 +2854,8 @@ class RD_EXPORT Producer : public virtual Handle {
* to make sure all queued and in-flight produce requests are completed
* before terminating.
*
- * @remark This function will call poll() and thus trigger callbacks.
+ * @remark This function will call Producer::poll() and thus
+ *         trigger callbacks.
*
* @returns ERR__TIMED_OUT if \p timeout_ms was reached before all
* outstanding requests were completed, else ERR_NO_ERROR
@@ -2855,7 +2868,7 @@ class RD_EXPORT Producer : public virtual Handle {
*
* @param purge_flags tells which messages should be purged and how.
*
- * The application will need to call ::poll() or ::flush()
+ * The application will need to call Handle::poll() or Producer::flush()
* afterwards to serve the delivery report callbacks of the purged messages.
*
* Messages purged from internal queues fail with the delivery report
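The corrected `Header` copy-constructor and assignment-operator docs above describe a classic deep-copy pattern: both operations duplicate the value buffer, and assignment guards against self-assignment. A minimal self-contained sketch of that pattern (a simplified stand-in class, not the real `RdKafka::Headers::Header` API):

```cpp
#include <cstddef>
#include <cstring>
#include <string>

/* Simplified stand-in illustrating the deep-copy pattern documented
 * on RdKafka::Headers::Header: copy construction and assignment both
 * duplicate the value buffer rather than sharing it. */
class Header {
 public:
  Header (const std::string &key, const void *value, size_t value_size):
      key_(key), value_(NULL), value_size_(value_size) {
    value_ = copy_value(value, value_size);
  }

  /* Copy constructor: duplicates the other Header's value buffer. */
  Header (const Header &other):
      key_(other.key_), value_(NULL), value_size_(other.value_size_) {
    value_ = copy_value(other.value_, other.value_size_);
  }

  /* Assignment operator: self-assignment safe; releases the old
   * buffer, then deep-copies the new one. */
  Header& operator= (const Header &other) {
    if (&other == this)
      return *this;
    key_ = other.key_;
    delete[] static_cast<char *>(value_);
    value_size_ = other.value_size_;
    value_ = copy_value(other.value_, other.value_size_);
    return *this;
  }

  ~Header () {
    delete[] static_cast<char *>(value_);
  }

  std::string key () const { return key_; }
  const void *value () const { return value_; }
  size_t value_size () const { return value_size_; }

 private:
  /* Allocates and fills a private copy of the value bytes. */
  static void *copy_value (const void *value, size_t size) {
    if (!value)
      return NULL;
    char *dst = new char[size];
    std::memcpy(dst, value, size);
    return dst;
  }

  std::string key_;
  void *value_;
  size_t value_size_;
};
```

Because each copy owns an independent buffer, destroying one `Header` never invalidates another, which is why the documentation can simply state "Use `delete` to deconstruct."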