Async IO

Hi,

Does Chapel have async I/O / an event loop (like Boost's io_context)?
The use case is using TCP sockets asynchronously; I don't want to use multi-threading.

Hi @rivalq, welcome to Chapel Discourse!

Are you thinking of something like what's proposed in the following issue: [Feature Request]: extensions to select syntax for go-style channels and sockets · Issue #26993 · chapel-lang/chapel · GitHub? If so, we don't have that yet, and I'm not sure offhand what the best practices are in the meantime. I believe @mppf is out today, but he is most familiar with the proposal and the work that inspired it, so he may have ideas when he checks back in.

That issue's idea stemmed from a package module that added Go-style channels (Channel) and may also have been of interest during the development of the Socket module. So searching for tests that use those modules, to see whether they implement patterns like this, might be informative as well (I haven't taken a look myself).

-Brad

Hi @rivalq

In addition to the Channel/Socket modules Brad linked, you may also want to look at the Futures module. It is similar to Python Futures or JavaScript Promises: you call async(someFunction, arg1, arg2, ...) and it returns a Future object, which you can then either explicitly waitFor at some point or chain with other Futures using andThen.
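For comparison, here is roughly what that same pattern looks like with Python's concurrent.futures (this is just the Python analogy mentioned above, not Chapel code; submit/result play the roles of async/waitFor):

```python
from concurrent.futures import ThreadPoolExecutor

def add(a, b):
    return a + b

with ThreadPoolExecutor() as pool:
    fut = pool.submit(add, 1, 2)   # roughly analogous to Chapel's async(add, 1, 2)
    result = fut.result()          # roughly analogous to waitFor
```

Python's add_done_callback would be the rough counterpart of chaining with andThen.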

-Jade

Hi @jabraham @bradcray,

Thanks for your responses; this is similar to what I was looking for.
But first I want to ask how the Socket library is currently used. Let's say I have a TCP socket open; I would then have to keep polling it to check whether the application has any data to read.
Is that polling done on a separate thread?

What I was proposing was a single-threaded async system, i.e. an event loop.
With C++'s Boost io_context, we can use it like below:

#include <boost/asio.hpp>
#include <iostream>
#include <string>

int main() {
    boost::asio::io_context io_context;
    
    // Create a TCP socket
    boost::asio::ip::tcp::socket socket(io_context);
    
    try {
        // Connect to server (example IP and port)
        boost::asio::ip::tcp::endpoint endpoint(
            boost::asio::ip::make_address("127.0.0.1"), 8080);
        socket.connect(endpoint);
        
        // Buffer for receiving data
        char data[1024];
        
        // Non-blocking mode
        socket.non_blocking(true);
        
        while (true) {
            // Poll the io_context
            io_context.poll();
            
            // Try to read data
            boost::system::error_code error;
            size_t length = socket.read_some(boost::asio::buffer(data), error);
            
            if (!error) {
                // Process data
                std::cout << "Received: " << std::string(data, length) << std::endl;
            }
            else if (error == boost::asio::error::would_block) {
                // No data available, continue polling
                std::cout << "No data available" << std::endl;
            }
            else {
                // Error occurred
                std::cout << "Error: " << error.message() << std::endl;
                break;
            }
            
            // Sleep to avoid busy waiting (blocking wait on a timer)
            boost::asio::steady_timer timer(io_context, std::chrono::milliseconds(100));
            timer.wait();
        }
    }
    catch (const std::exception& e) {
        std::cerr << "Exception: " << e.what() << std::endl;
    }
    
    return 0;
}

Hi @rivalq,

I do not regularly work with the Channel or Socket modules, but as far as I know there is no ability to poll the way you do here. I think the closest would be, in Socket, using recv with a timeout and catching the thrown exception if no data is read.
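As a rough analogy (in Python's socket module, not Chapel's Socket), the recv-with-timeout pattern looks like this:

```python
import socket

a, b = socket.socketpair()  # a connected pair standing in for a TCP connection
a.settimeout(0.1)           # recv gives up after 100 ms instead of blocking forever
try:
    data = a.recv(1024)     # nothing was sent, so this raises socket.timeout
except socket.timeout:
    data = None             # treat the timeout as "no data available yet"
finally:
    a.close()
    b.close()
```

The timeout plus exception handling stands in for the would_block check in the Boost example above.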

For example uses of the Socket library, look in the Chapel test directory at test/library/packages/Socket/, which currently just contains some general correctness tests.

-Jade

Just as another example, I have a very WIP Prometheus module implementation where I use the Socket module to listen for a Prometheus server and respond to it in an async task. The async task starts here, in the serve method.

Engin

Hi @rivalq --

The Socket module was created as a Google Summer of Code project. One of the motivations was to demonstrate that Chapel's user-level threads can support many concurrent requests while still working with a task per connection, which is arguably the most programmable approach.

If you are able to use the Socket module, it should use sync variables and Chapel's user-level tasks to multiplex the work of processing many connections onto the cores on your system. It might not do that in every case you want it to. We have not studied its performance very much, but it does have this design.

We appreciate contributions / PRs to improve the Socket module, which was added in this PR: [GSoC 2021] Socket Library by king-11 · Pull Request #17960 · chapel-lang/chapel · GitHub

Hi @mppf,

Thanks for the response. I think I have gotten some understanding of how sockets are currently used in the Chapel ecosystem.

I want to know your opinion about having async I/O in Chapel, i.e. an event-loop kind of thing. Of course, Chapel has the power of multithreading, but polling sockets using threads can incur the cost of context switches, which won't be efficient in certain use cases; there, async I/O seems to be a good solution.
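To illustrate the kind of single-threaded multiplexing I mean, here is a minimal sketch in Python using the selectors module (just an analogy for an io_context-style loop, not a proposal for the Chapel API):

```python
import selectors
import socket

sel = selectors.DefaultSelector()
a, b = socket.socketpair()   # a connected pair standing in for TCP connections
a.setblocking(False)
sel.register(a, selectors.EVENT_READ)

b.send(b"hello")             # make the registered socket readable

received = []
# One pass of a poll-style event loop: wait until some registered
# socket is ready, then service it -- all on a single thread.
for key, _events in sel.select(timeout=1):
    received.append(key.fileobj.recv(1024))

sel.close()
a.close()
b.close()
```

A real event loop would run select() repeatedly and dispatch to per-socket callbacks, but the core idea is the same: one thread, many sockets, readiness notifications instead of per-connection threads.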

I can start writing an io-context-like module (parallel to C++'s Boost) in Chapel as a proposal.

Fun Fact : That PR you linked was done by my batchmatch :stuck_out_tongue:

Hi @rivalq -

I have a few questions about the approach that it might be interesting to investigate / research:

First, what applications or libraries would this feature enable? The situations in which I can imagine using the Sockets library with a Chapel program are about integrating a server to monitor a computation / do in situ visualization / etc. These fit into an existing computation & the focus is more on the computation than on the server. As a result, these have relatively small demands on the Socket library in terms of performance, and productivity/convenience is probably more important. In your original post, you mentioned the use case is to use sockets when not wanting to use multi-threading. I might just be missing the point, but that seems like a weird situation for a Chapel program, since these are normally multi-core (and often multi-node) rather than sequential/serial.

Second, most systems that I'm aware of that use event systems like this are single threaded. Chapel is designed for parallel computing. How would the event system work with multiple Chapel tasks? Will it combine nicely or awkwardly with the parallel features of Chapel? Are you aware of multi-threaded event-based systems? How do they handle multiple threads/tasks/cores?

Third, I understand that performance is your main motivation for having an event-based I/O system. I am not so sure that the intuition about performance here is accurate. How do you know that single-threaded event-based will be faster than multi-core with user-level-threads doing context switching? Do you have any way to predict the performance of either approach without implementing everything?

Note that the current approach in the Sockets library is using an event-based system to notify the different user-level tasks. That allows it to use system calls like epoll to be efficient. But, there are certainly lots of potential problems with the current approach, including the fact that Chapel's tasks prioritize performance over fairness (in particular, tasks run until they end or they yield, which might not go well in a server setting where one might want to guarantee a certain amount of responsiveness; OTOH it would be fine for a server that "fits in" to an existing computation).
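As a rough single-process analogy for that design, Python's asyncio event loop similarly resumes lightweight tasks when the thing they are waiting on completes (this is an illustration of the general pattern, not of Chapel's runtime):

```python
import asyncio

async def worker(name, delay):
    # Awaiting yields control back to the event loop, which can then
    # run other tasks -- loosely like a user-level task blocking on I/O
    # and being woken by a readiness notification.
    await asyncio.sleep(delay)
    return name

async def main():
    # One event loop interleaves both tasks on a single thread;
    # gather returns results in argument order.
    return await asyncio.gather(worker("a", 0.02), worker("b", 0.01))

results = asyncio.run(main())
```

The fairness concern above shows up here too: a coroutine that never awaits starves the loop, just as a Chapel task that never yields would.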

Fun Fact : That PR you linked was done by my batchmatch :stuck_out_tongue:

Not a big deal, but I don't know what a batchmatch is.

Anyway, thanks for your interest & willingness to implement something. It'd be nice to know more about the context for your question -- are you looking for a project for a class or something like that? If so it'd be good to know more about the timeline / size the project needs to be. We'd certainly appreciate help in improvements to Chapel's support for network programming.