Example code in C connector tp.h is broken #20
There was a bug in the reallocation function: it did not account for possible usage of the current position, so the documentation example was usable until that bug was introduced. Regarding case (b): tp_reply() was originally written with the idea that it should work on a supplied buffer that already contains a completely read reply. Using tp_req(), it is only possible to match a single reply from the buffer that tp_init() was initialized with (or that was allocated during usage), so it always looks at the start of the buffer. tp_reqbuf() can be used with any separate buffering scheme to handle this properly. Thank you.
say_logger_init() zeroes the default logger object (log_default) before proceeding to logging subsystem configuration. If configuration fails for some reason (e.g. error opening the log file), the default logger will be left uninitialized, and we will crash trying to print the error to the console:

#0  0x564065001af5 in print_backtrace+9
#1  0x564064f0b17f in _ZL12sig_fatal_cbi+e2
#2  0x7ff94519f0c0 in __restore_rt+0
#3  (nil) in +0
#4  0x564064ffc399 in say_default+d2
#5  0x564065011c37 in _ZNK11SystemError3logEv+6d
#6  0x5640650117be in exception_log+3d
#7  0x564064ff9750 in error_log+1d
#8  0x564064ff9847 in diag_log+50
#9  0x564064ffab9b in say_logger_init+22a
#10 0x564064f0bffb in load_cfg+69a
#11 0x564064fd2f49 in _ZL13lbox_cfg_loadP9lua_State+12
#12 0x56406502258b in lj_BC_FUNCC+34
#13 0x564065045103 in lua_pcall+18e
#14 0x564064fed733 in luaT_call+29
#15 0x564064fe5536 in lua_main+b9
#16 0x564064fe5d74 in run_script_f+7b5
#17 0x564064f0aef4 in _ZL16fiber_cxx_invokePFiP13__va_list_tagES0_+1e
#18 0x564064fff4e5 in fiber_loop+82
#19 0x5640651a123b in coro_init+4c
#20 (nil) in +4c

Fix this by making say_logger_init() initialize the default logger object first and only assign it to log_default on success. See #3048
This update brings tarantool-python changes from releases 0.6.6[^1], 0.7.0 and 0.7.1 and the unreleased change [1] regarding unification of the Response __str__ method across Python 2 and Python 3. See the release notes [2] for the list of changes in the connector. We need [1] to obtain the same test result on different Python versions for tarantool's box-py/call.test.py test.

[^1]: Except for the replacement of yaml.load() with yaml.safe_load() in tests, which is already here.

[1]: tarantool/tarantool-python#186
[2]: https://github.com/tarantool/tarantool-python/releases

Part of #20 Part of #5652
There are two import approaches in Python: import the whole module, or import only the needed objects (a class, a variable, etc.). We don't have a code style guide for the test-run project, and I personally like the second approach. Moreover, it seems that 'from foo import bar' is the preferred way within test-run's code. This patch was made by Alexander Turenko in the scope of the task to make the code compatible with Python 3, but it is not actually related to it. Part of #20 Co-authored-by: Alexander Turenko <alexander.turenko@tarantool.org>
The patch introduces a GitHub Actions workflow [1] and gets rid of the Travis integration. Note that Python 3.4 is not used because its support has been deprecated: "DEPRECATION: Python 3.4 support has been deprecated. pip 19.1 will be the last one supporting it. Please upgrade your Python as Python 3.4 won't be maintained after March 2019 (cf PEP 429)." So it has been removed from the matrix. 1. https://docs.github.com/en/actions/guides/building-and-testing-python Part of #20
flake8 complains about this line when run under Python 3, but does not complain under Python 2. Part of #20 Co-authored-by: Alexander Turenko <alexander.turenko@tarantool.org>
Part of #20 Co-authored-by: Alexander Turenko <alexander.turenko@tarantool.org>
In Python 3 'print' becomes a function, see [1]. The patch makes print calls compatible with Python 3. 1. https://docs.python.org/3/whatsnew/3.0.html#print-is-a-function Part of #20 Co-authored-by: Alexander Turenko <alexander.turenko@tarantool.org>
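For illustration, a minimal sketch of the kind of change this migration implies (values are made up; the snippet is not taken from the patch):

```python
from __future__ import print_function  # gives Python 2.7 the same print() semantics

# Python 2 statement form, no longer valid syntax on Python 3:
#     print "call", name
# Function form that works on both interpreters:
name = "f1"
print("call", name)
print("no trailing newline", end="")
```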
flake8 warns about SyntaxError E999: "leading zeros in decimal integer literals are not permitted; use an 0o prefix for octal integers". The patch fixes the error. Part of #20 Co-authored-by: Alexander Turenko <alexander.turenko@tarantool.org>
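A small example of the literal change flake8 demands here (the value is hypothetical, not taken from the patch):

```python
import stat

# Python 2 only spelling, rejected by Python 3 with E999:
#     mode = 0755
# Portable spelling on Python 2.6+ and Python 3:
mode = 0o755
assert mode == (stat.S_IRWXU | stat.S_IRGRP | stat.S_IXGRP |
                stat.S_IROTH | stat.S_IXOTH)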
Introduced string and integer types in lib/utils.py, like we do in tarantool-python, and used these types to check object types in the code. Part of #20
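A sketch of what such helper types might look like; the names and bodies are assumptions, the real definitions live in lib/utils.py and tarantool-python:

```python
import sys

if sys.version_info[0] == 2:
    string_types = (basestring,)   # noqa: F821 -- defined on Python 2 only
    integer_types = (int, long)    # noqa: F821 -- defined on Python 2 only
else:
    string_types = (str,)
    integer_types = (int,)

def is_string(obj):
    return isinstance(obj, string_types)

def is_integer(obj):
    return isinstance(obj, integer_types)
```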
In Python 3 the execfile() function has been removed [1]; exec() is suggested instead of execfile(fn). 1. https://docs.python.org/3.3/whatsnew/3.0.html?highlight=execfile#builtins Part of #20 Co-authored-by: Alexander Turenko <alexander.turenko@tarantool.org>
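A commonly used drop-in replacement, shown here as a sketch (the exact call used in the patch may differ):

```python
def exec_file(path, globs=None):
    """Equivalent of Python 2's execfile(path) that also works on Python 3."""
    if globs is None:
        globs = {}
    with open(path) as f:
        code = compile(f.read(), path, 'exec')
    exec(code, globs)
```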
From What's New In Python 3.0 [1]: The StringIO and cStringIO modules are gone. Instead, import the io module and use io.StringIO or io.BytesIO for text and data respectively. 1. http://docs.python.org/3.0/whatsnew/3.0.html Part of #20 Co-authored-by: Alexander Turenko <alexander.turenko@tarantool.org>
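A minimal illustration of the replacement (not code from the patch):

```python
import io

text_buf = io.StringIO()      # unicode text on both Python 2 and 3
text_buf.write(u"test output\n")
data_buf = io.BytesIO()       # raw bytes on both Python 2 and 3
data_buf.write(b"\x00\x01")

print(text_buf.getvalue())
print(data_buf.getvalue())
```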
Part of #20 Co-authored-by: Alexander Turenko <alexander.turenko@tarantool.org>
Since Python 3.2 inline comments (a semicolon, possibly in the middle of a line) are disabled by default (inline_comment_prefixes=None). Since Python 3.2 duplicate sections and options are forbidden by default (strict=True). We should enable inline comments back. Now it 'works' only because we have inline comments only within the disabled option and because parsing of the option value disregards extra values (unknown test names). It would not work for commenting out other options, and that would be an undesirable degradation. Note: the ConfigParser (and SafeConfigParser) constructors have no inline_comment_prefixes option in Python 2, so we should pass it only on Python 3.2+. Regarding the strict mode: I think it is good to keep it enabled (that is, not disable it explicitly) to protect ourselves from mistakes. So under Python 3 both options end up enabled. Part of #20 Co-authored-by: Alexander Turenko <alexander.turenko@tarantool.org>
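A sketch of the version-guarded construction described above; the comment prefix tuple and the file name are assumptions, not the exact values from the patch:

```python
import sys

try:
    import configparser                  # Python 3
except ImportError:
    import ConfigParser as configparser  # Python 2

if sys.version_info[0] >= 3:
    # Re-enable ';' inline comments; strict=True stays at its default.
    config = configparser.ConfigParser(inline_comment_prefixes=(';',))
else:
    # Python 2 has no inline_comment_prefixes option and always
    # recognizes inline comments.
    config = configparser.ConfigParser()

config.read('suite.ini')  # hypothetical file name
```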
Unlike Python 2, Python 3 does not treat `import bar` from a module in a package `foo` as importing `foo.bar`. Absolute imports relative to the test-run root (like `import lib.utils`) still work, because `sys.path` contains the script directory (after symlink resolution) on both Python versions. https://www.python.org/dev/peps/pep-0328/ Part of #20 Co-authored-by: Alexander Turenko <alexander.turenko@tarantool.org>
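A tiny runnable check of the sys.path behaviour the commit relies on (generic illustration, not code from the patch):

```python
import os
import sys

# Python 3 no longer turns `import bar` inside package `foo` into `foo.bar`.
# Absolute imports like `import lib.utils` keep working only because the
# script directory (the test-run root) is on sys.path:
print(sys.path[0])                    # directory of the running script ('' in a REPL)
print(os.path.realpath(sys.path[0]))  # the same directory after symlink resolution
```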
What's New In Python 3.0 [1]: The function attributes named func_X have been renamed to use the __X__ form, freeing up these names in the function attribute namespace for user-defined attributes. To wit, func_closure, func_code, func_defaults, func_dict, func_doc, func_globals, func_name were renamed to __closure__, __code__, __defaults__, __dict__, __doc__, __globals__, __name__, respectively. 1. https://docs.python.org/3/whatsnew/3.0.html#operators-and-special-methods Part of #20
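A small demonstration of the renamed attributes (the function itself is made up):

```python
def f(x=1):
    """Example function."""
    return x

# Python 2 spellings: f.func_name, f.func_defaults, f.func_doc, f.func_code
# Portable spellings, available since Python 2.6 and the only ones on Python 3:
print(f.__name__, f.__defaults__, f.__doc__)
print(f.__code__.co_argcount)
```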
Fix Tarantool connection liveness and box-py/call.test.py test. Part of #20 Co-authored-by: Alexander Turenko <alexander.turenko@tarantool.org>
gevent 1.2 and below does not support Python 3.7 and above, see [1]. 1. gevent/gevent#1297 Part of #20
In Python 3 the 'spawn' start method of the multiprocessing module becomes the default on Mac OS [1]. The 'spawn' method causes re-execution of some code which is already executed in the main process. At least it is seen in the lib/__init__.py code, which removes the 'var' directory. Some other code may have side effects too; it requires investigation. The method also requires object serialization, which doesn't work when objects use lambdas, which are used, for example, in the TestSuite class (lib/test_suite.py). The latter problem is easy to fix, but the former looks more fundamental. So we stick to the 'fork' method for now. The new start method API is available on Python 3 only:

Traceback (most recent call last):
  File "../../test/test-run.py", line 227, in <module>
    multiprocessing.set_start_method('fork')
AttributeError: 'module' object has no attribute 'set_start_method'

1. https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods Fixes #265 Part of #20 Co-authored-by: Alexander Turenko <alexander.turenko@tarantool.org>
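One possible way to guard the call so that Python 2 keeps working, shown as a sketch; the hasattr() check is an assumption, not necessarily how test-run does it:

```python
import multiprocessing

# set_start_method() appeared in Python 3.4; on Python 2 'fork' is the
# only (and default) start method on POSIX anyway, so the call is skipped.
if hasattr(multiprocessing, 'set_start_method'):
    multiprocessing.set_start_method('fork')
```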
In Python 2 the default string type (`<str>`) is a binary, non-unicode string. We receive data from a socket, from a Popen stream, or from a file as a string and operate on those strings without any conversions. Python 3 draws a line here. We usually operate on unicode strings in the code (because this is the default string type, `<str>`), but receive bytes from a socket and a Popen stream. We can use unicode or binary streams for files (unicode by default[^1]).

This commit decouples bytes and strings. In most cases it means that we convert data from bytes to a string after receiving it from a socket / Popen stream and convert it back from a string to bytes before writing to a socket. Those operations are no-ops on Python 2. So the general rule for our APIs is to accept and return `<str>` regardless of the Python version. Not `<bytes>`, not `<unicode>`.

The only non-trivial change is around `FilteredStream` and writes into `sys.stdout`. The `FilteredStream` instance replaces `sys.stdout` during execution of a test, so it should follow the usual convention and accept `<str>` in the `write()` method. This is both intuitive and necessary, because `*.py` tests rely on `print('bla bla')` to write into a result file. However, the stream should also accept `<bytes>`, because we have a unit test (`unit/json.test`) which produces binary output that does not conform to the UTF-8 encoding. The separate `write_bytes()` method was introduced for this sake. UnittestServer and AppServer write test output as bytes directly; TarantoolServer relies on the usual string output. We also use bytes directly when writing from one stream to another: in `app_server.py` for stderr (writing to a log file), and in `tarantool_server.py` for the log destination property (because it is the destination for Popen).

[^1]: Technically it depends on the Python version and the system locale. There are situations when the default encoding of text file streams is not 'utf-8'. They will be handled in the next commit.

Part of #20 Co-authored-by: Sergey Bronnikov <sergeyb@tarantool.org>
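A sketch of the conversion helpers implied by the convention above; the function names are hypothetical and do not necessarily match test-run's code:

```python
def to_str(data):
    """Bytes from a socket / Popen stream -> <str>; a no-op on Python 2."""
    if isinstance(data, str):
        return data
    return data.decode('utf-8')

def to_bytes(text):
    """<str> -> bytes before writing to a socket; a no-op on Python 2."""
    if isinstance(text, bytes):
        return text
    return text.encode('utf-8')

assert to_str(b'ok') == 'ok' and to_bytes('ok') == b'ok'
```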
The problem: test files and result files contain UTF-8 symbols outside the ASCII range. A plain `open(file, 'r').read()` without the encoding='utf-8' argument fails to decode them when the default encoding for text file streams is not 'utf-8'. We meet this situation on Python 3.6.8 (provided by CentOS 7 and CentOS 8) when the POSIX locale is set (`LC_ALL=C`). The solution is described in the code comment: replace the `open()` built-in function and always set `encoding='utf-8'`. That's a hacky way, but it looks better than changing every `open()` call across the code and remembering to do so in all future code (while keeping Python 2 compatibility in mind). Maybe we'll revisit the approach later. There is another way to hack the `open()` behaviour that works for me on Python 3.6.8:

| import _bootlocale
| _bootlocale.getpreferredencoding = (lambda *args: 'utf8')

However, it leans on Python internals and looks less reliable than the implemented one. Part of #20
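A minimal sketch of such an open() override, with deliberately simplified argument handling; the real code in test-run may differ:

```python
import sys

if sys.version_info[0] >= 3:
    import builtins
    _real_open = builtins.open

    def _open_utf8(file, mode='r', *args, **kwargs):
        # Default text-mode files to UTF-8 regardless of the system locale;
        # leave binary mode and explicit encodings alone.
        if 'b' not in mode and not args and 'encoding' not in kwargs:
            kwargs['encoding'] = 'utf-8'
        return _real_open(file, mode, *args, **kwargs)

    builtins.open = _open_utf8
```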
When Tarantool unit tests run under test-run, an error with output like the one below may happen:

[027] small/small_class.test
[027] Test.run() received the following error:
[027] Traceback (most recent call last):
[027]   File "/__w/tarantool/tarantool/test-run/lib/test.py", line 192, in run
[027]     self.execute(server)
[027]   File "/__w/tarantool/tarantool/test-run/lib/unittest_server.py", line 20, in execute
[027]     proc = Popen(execs, cwd=server.vardir, stdout=PIPE, stderr=STDOUT)
[027]   File "/usr/lib/python3.5/subprocess.py", line 676, in __init__
[027]     restore_signals, start_new_session)
[027]   File "/usr/lib/python3.5/subprocess.py", line 1282, in _execute_child
[027]     raise child_exception_type(errno_num, err_msg)
[027] FileNotFoundError: [Errno 2] No such file or directory: '../test/small/small_class.test'
[027] [ fail ]

The root cause of the error is the changed behaviour of Popen in Python 3 in comparison to Python 2: one should set the path to the executable explicitly, because Python 3 looks for it relative to cwd.

Python 2 [1]: "If cwd is not None, the child’s current directory will be changed to cwd before it is executed. Note that this directory is not considered when searching the executable, so you can’t specify the program’s path relative to cwd."

Python 3 [2]: "If cwd is not None, the function changes the working directory to cwd before executing the child. <...> In particular, the function looks for executable (or for the first item in args) relative to cwd if the executable path is a relative path."

1. https://docs.python.org/2/library/subprocess.html#subprocess.Popen
2. https://docs.python.org/3/library/subprocess.html#subprocess.Popen

Part of #20
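One possible fix illustrating the difference; the function and argument names are hypothetical, the actual change is in lib/unittest_server.py:

```python
import os
from subprocess import PIPE, STDOUT, Popen

def start_test(executable, vardir):
    # Resolve the path before Popen changes the working directory to vardir,
    # so the executable lookup works the same on Python 2 and Python 3.
    cmd = [os.path.abspath(executable)]
    return Popen(cmd, cwd=vardir, stdout=PIPE, stderr=STDOUT)
```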
To make the box-py/call.test.py test compatible with Python 2 and Python 3, it must produce the same output when run with both versions of the interpreter. When running on Python 2.7 the output of the called commands is enclosed in round brackets, while on Python 3 it is not:

[001] @@ -20,7 +20,7 @@
[001] - true
[001] - null
[001] ...
[001] -call f1 ()
[001] +('call ', 'f1', ((),))
[001] - 'testing'
[001] - 1
[001] - False

To fix it, print should use a formatted string in the print() call. Part of #20
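A minimal sketch of the idea behind the fix; the values and exact formatting are hypothetical, not the patch itself:

```python
name, args = 'f1', ((),)

# Under Python 2 the function-call syntax below prints a tuple,
# e.g. ('call ', 'f1', ((),)), while Python 3 prints the joined values:
#     print('call ', name, args)
# A formatted string produces identical output on both versions:
print('call {} {}'.format(name, args))
```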
Related issues and changes:
- Support Python 3 in Tarantool test suite #5538
- Python 3 in tarantool-python: tarantool/tarantool-python#181, tarantool/tarantool-python#186
- Use Python 3 in a test infrastructure #5652

Closes #20
Since Python 3.6, the object to be deserialized in json.loads() can be of type bytes or bytearray. The input encoding should be UTF-8, UTF-16 or UTF-32 [1]. The patch follows up commit 395edeb ('python3: decouple bytes and strings') and is a part of the task of switching to Python 3. 1. https://docs.python.org/3/library/json.html Follows up: #20
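A short illustration of a call that works across the mentioned versions (the payload is hypothetical):

```python
import json

payload = b'{"id": 1}'
try:
    obj = json.loads(payload)                    # Python 3.6+ accepts bytes directly
except TypeError:
    obj = json.loads(payload.decode('utf-8'))    # older Python 3 versions need a str
print(obj['id'])
```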
`key_part::offset_slot_cache` and `key_part::format_epoch` are used for speeding up tuple field lookup in `tuple_field_raw_by_part()`. These structure members are accessed and updated without any locks, assuming this code is executed exclusively in the tx thread. However, this isn't necessarily true because we also perform tuple field lookups in vinyl read threads. Apparently, this can result in unexpected races and bugs, for example:

```
#1  0x590be9f7eb6d in crash_collect+256
#2  0x590be9f7f5a9 in crash_signal_cb+100
#3  0x72b111642520 in __sigaction+80
#4  0x590bea385e3c in load_u32+35
#5  0x590bea231eba in field_map_get_offset+46
#6  0x590bea23242a in tuple_field_raw_by_path+417
#7  0x590bea23282b in tuple_field_raw_by_part+203
#8  0x590bea23288c in tuple_field_by_part+91
#9  0x590bea24cd2d in unsigned long tuple_hint<(field_type)5, false, false>(tuple*, key_def*)+103
#10 0x590be9d4fba3 in tuple_hint+40
#11 0x590be9d50acf in vy_stmt_hint+178
#12 0x590be9d53531 in vy_page_stmt+168
#13 0x590be9d535ea in vy_page_find_key+142
#14 0x590be9d545e6 in vy_page_read_cb+210
#15 0x590be9f94ef0 in cbus_call_perform+44
#16 0x590be9f94eae in cmsg_deliver+52
#17 0x590be9f9583e in cbus_process+100
#18 0x590be9f958a5 in cbus_loop+28
#19 0x590be9d512da in vy_run_reader_f+381
#20 0x590be9cb4147 in fiber_cxx_invoke(int (*)(__va_list_tag*), __va_list_tag*)+34
#21 0x590be9f8b697 in fiber_loop+219
#22 0x590bea374bb6 in coro_init+120
```

Fix this by skipping this optimization for threads other than tx. No test is added because reproducing this race is tricky. Ideally, bugs like this one should be caught by fuzzing tests or thread sanitizers.

Closes #10123

NO_DOC=bug fix
NO_TEST=tested manually with fuzzer
The reason is that the previous libcurl submodule update in commit 0919f39 ("third_party: update libcurl from 8.8.0 to 8.10.1") reveals the following regression:

NOWRAP
```c
$ tarantool -e "require('http.client').new():get('https://google.com') collectgarbage()"
tarantool: ./third_party/curl/lib/multi.c:3691: curl_multi_assign: Assertion `!(multi)' failed.
Aborted (core dumped)
```
NOWRAP

The stacktrace is the following:

NOWRAP
```c
<...>
#4  __assert_fail
#5  curl_multi_assign        // <- called by us
#6  curl_multi_sock_cb       // <- this is our callback
#7  Curl_multi_pollset_ev
#8  cpool_update_shutdown_ev
#9  cpool_discard_conn
#10 cpool_close_and_destroy_all
#11 Curl_cpool_destroy
#12 curl_multi_cleanup
#13 curl_env_finish          // <- destroy the multi handle
#14 httpc_env_finish
#15 luaT_httpc_cleanup
#16 lj_BC_FUNCC
#17 gc_call_finalizer
#18 gc_finalize
#19 gc_onestep
#20 lj_gc_fullgc
#21 lua_gc
#22 lj_cf_collectgarbage
#23 lj_BC_FUNCC
#24 lua_pcall
#25 luaT_call
#26 lua_main
#27 run_script_f
#28 fiber_cxx_invoke
#29 fiber_loop
#30 coro_init
```
NOWRAP

The multi handle is being destroyed, but our `CURLMOPT_SOCKETFUNCTION` callback is invoked and the `curl_multi_assign()` call (invoked to associate a libev watcher with the given file descriptor) fails on the assertion. Everything is as described in curl/curl#15201.

The first bad libcurl commit is [curl-8_10_0-4-g48f61e781][1], but it was later fixed in [curl-8_10_1-241-g461ce6c61][2]. This commit updates libcurl to this revision to fix the regression.

Adjusted build options in our build script:

* Added `CURL_DISABLE_IPFS=ON`: [curl-8_10_1-57-gce7d0d413][3]
* Added `CURL_TEST_BUNDLES=OFF`: [curl-8_10_1-67-g71cf0d1fc][4]
* Changed `ENABLE_WEBSOCKETS=OFF` to `CURL_DISABLE_WEBSOCKETS=ON`: [curl-8_10_1-130-gd78e129d5][5]

[1]: curl/curl@48f61e7
[2]: curl/curl@461ce6c
[3]: curl/curl@ce7d0d4
[4]: curl/curl@71cf0d1
[5]: curl/curl@d78e129

NO_DOC=bugfix
NO_CHANGELOG=fixes an unreleased commit
NO_TEST=can't reproduce without https to add a test case, verified locally
See the test program at https://gist.github.com/IlyaM/5715139, which is based on the documentation in the comments in tp.h. The test program does a query in space 0 by the string key "0e72ae1a-d0be-4e49-aeb9-aebea074363c". If the test database does contain some tuple with such a key, then the program ends up hanging, waiting for a reply from Tarantool. The underlying bug is caused by a wrong call to tp_ensure inside the while loop, which is written exactly as suggested in the documentation. In such a program tp.h ends up allocating insufficient memory for the next read from the network, which causes the program:
a) to potentially corrupt memory, because the read() call reads more bytes than were allocated in the buffer;
b) to set internal pointers in the tp struct such that the code in the while loop thinks it needs to read more data from the network when in fact the whole reply has been read.
If I change the tp_ensure line to the following, then the code does seem to work correctly:
This suggests that the example in the documentation is wrong.
P.S. I would also change the example in the comments for tp_select to pass a non-zero value for the limit parameter. I spent significant time trying to understand why I was not getting any results because of this.