From: Antony Pavlov <antonynpavlov@gmail.com>
To: Daniele Lacamera <daniele.lacamera@tass.be>
Cc: barebox <barebox@lists.infradead.org>
Subject: Re: picotcp tftp support [was Adding IPv4 multicast support]
Date: Tue, 15 Jul 2014 23:02:13 +0400
Message-ID: <20140715230213.c577abad777f1056b5ddd223@gmail.com>
In-Reply-To: <CAOngqVVGs6PzdwXPU5ON48k=d5SwdeLn8zTdQf6MYT4mgap3PA@mail.gmail.com>
On Tue, 15 Jul 2014 17:55:21 +0200
Daniele Lacamera <daniele.lacamera@tass.be> wrote:
> On Tue, Jul 15, 2014 at 2:57 PM, Antony Pavlov <antonynpavlov@gmail.com> wrote:
> > On Tue, 15 Jul 2014 12:57:21 +0200
> > Daniele Lacamera <daniele.lacamera@tass.be> wrote:
> >
> >> On Tue, Jul 15, 2014 at 12:27 PM, Antony Pavlov <antonynpavlov@gmail.com> wrote:
> >>
> >> >> I will be able to provide such an interface by using a similar
> >> >> approach to what you used for ping (so via net_poll() routine called
> >> >> in a loop), assuming that your posix-like interface expects blocking
> >> >> calls for read/write operations.
> >> >
> >> > Alas! We can't use this approach for tftp because tftp is a FILESYSTEM in barebox.
> >>
> >> Then again, I'd like to know if your FS implementation actually needs
> >> blocking call, and in case, where is the code supposed to block. Does
> >> barebox have some kind of support for multiple threads, or a default
> >> event loop where background operations can be added? Or are the FS
> >> calls non blocking?
> >
> > AFAIK barebox does not support threads.
> > Also all filesystem calls are blocking.
> >
>
> Then it is still not clear to me *where* a filesystem call is
> supposed to block. On a multi-threaded system, the call blocks
> somewhere in a thread_sleep() function. On a single-threaded
> execution model, we will need to tick the TCP/IP stack in the
> background while the call is "busy". Suppose that you issue a read
> call to any network filesystem (in this case TFTP issuing a get
> command). The file needs to be retrieved by the TCP/IP stack, so I
> guess that the only implementation that makes sense would be
> something like:
>
> while (callback_has_not_been_called) {
>     if (ctrlc())
>         break;
>     pico_stack_tick();
>     /* maybe sleep here */
> }
>
> What you stated earlier, i.e.
> > Tftp user code knows nothing about network stuff. User code just uses read and write
> > for accessing file data, no matter which driver (ramdisk, SATA, MTD, I2C or network)
> > is used for the actual data transfer.
>
> does not make much sense to me, as the network is required to access
> remote files, so the stack needs to tick constantly under the hood if
> you want to receive packets while your read call is blocked waiting
> for data.
>
> The user does not need to know that her read call is going through
> the network, but the TFTP fs module (or another underlying "driver")
> is supposed to use the picotcp API properly to retrieve the data
> needed while I/O operations are running. The intermediate layer (the
> new fs/tftp.c) is supposed to issue a stack tick whenever it is
> suspended waiting for network events.
>
> Finally, I will assume that the required use case is TFTP working in
> client mode, issuing GET/PUT commands on POSIX open() calls.
>
> I can provide to you (and the list) an example implementation, which
> I will develop on top of your latest picotcp branch and test via
> sandbox/tuntap. In the meanwhile, any comments on the topic from
> barebox developers are more than welcome.
First of all, please make it possible to use several tftp clients simultaneously
(the current picotcp tftp API has no "tftp session descriptor").
Here is a simple tftp barebox use case (I have just checked it):
barebox:/ mkdir /mnt
barebox:/ dhcp # get $eth0.serverip from dhcp
barebox:/ mount -t tftp $eth0.serverip /mnt
barebox:/ cp /mnt/file1 /mnt/file2
In this example we have two tftp clients working simultaneously.
One can easily construct a more complex use case, e.g. with two tftp
servers, one mounted at /mnt1 and the other at /mnt2.
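[Editor's note: that two-server variant would look like this on the barebox console; the server addresses here are made up for illustration.]

```shell
barebox:/ mount -t tftp 192.168.0.1 /mnt1
barebox:/ mount -t tftp 192.168.0.2 /mnt2
barebox:/ cp /mnt1/zImage /mnt2/zImage   # two independent tftp sessions at once
```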
--
Best regards,
Antony Pavlov
_______________________________________________
barebox mailing list
barebox@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/barebox