- comment for option "proto" (with version 2.4.0, full dual-stack functionality is available; there is no need for udp6, which is IPv6-only...)
- add firewall rules for the IPv6 connection
--> IPv6 setup/config is done via an OpenVPN Custom Configuration (and/or a further commit)
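A minimal sketch of what the Custom Configuration and firewall additions could look like; the port, interface name and prefixes below are placeholders, not values taken from this commit:

    # OpenVPN 2.4+: plain "proto udp" already accepts IPv4 and IPv6 (dual-stack)
    proto udp
    server-ipv6 fd00:1194::/64
    push "route-ipv6 2000::/3"

    # firewall additions for the IPv6 side of the tunnel
    ip6tables -I INPUT -p udp --dport 1194 -j ACCEPT
    ip6tables -I FORWARD -i tun21 -j ACCEPT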
The changes introduced in commit 65b59a8dce ((chg-release)/src-rt-6.x.4708/router/rc/transmission.c) to tune the TCP buffers essentially did not work, because sysctl is not included in Tomato, so the only way is to "echo" the values directly.
Also back out to the default values when transmission-daemon is stopped via the standard stop script included.
It's difficult to tell whether increasing these buffers will help the router in any way while transmission-daemon is running, so only time will tell.
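A rough sketch of the "echo the values directly" approach, i.e. what rc/transmission.c has to do given that there is no sysctl binary; the /proc/sys entries and numbers below are placeholders, not the values from the commit:

    #include <stdio.h>

    /* write one value straight into /proc/sys, since there is no sysctl binary */
    static void write_proc(const char *path, const char *value)
    {
        FILE *f = fopen(path, "w");
        if (f) {
            fprintf(f, "%s\n", value);
            fclose(f);
        }
    }

    /* on transmission-daemon start: enlarge the TCP buffers (placeholder values) */
    static void tune_tcp_buffers(void)
    {
        write_proc("/proc/sys/net/core/rmem_max", "1048576");
        write_proc("/proc/sys/net/core/wmem_max", "1048576");
    }

    /* on stop, from the standard stop script: back out to the defaults (placeholders) */
    static void restore_tcp_buffers(void)
    {
        write_proc("/proc/sys/net/core/rmem_max", "180224");
        write_proc("/proc/sys/net/core/wmem_max", "180224");
    }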
Captive Portal in TomatoUSB is based on NoCatSplash, which NEVER worked on ARM routers; it is also too old and doesn't support redirection to https sites properly.
It's broken in DD-WRT as well. While alternatives exist, porting them to TomatoUSB and replacing NoCatSplash is a different story.
When insert_inode_locked() fails in ext2_new_inode() it most likely means the inode bitmap got corrupted and we have again allocated an inode which is already in use. Also, doing unlock_new_inode() during error recovery is wrong since the inode does not have I_NEW set. Fix the problem by informing about the filesystem error and jumping to fail: (instead of fail_drop:), which doesn't call unlock_new_inode().
From upstream: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=ef6919c283257155def420bd247140e9fd2e9843
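The pattern described, sketched as a fragment of ext2_new_inode() (illustrative, not the verbatim upstream diff):

    if (insert_inode_locked(inode) < 0) {
        /* inode number handed out twice: the bitmap is corrupted */
        ext2_error(sb, "ext2_new_inode",
                   "inode number already in use - inode=%lu",
                   (unsigned long) ino);
        err = -EIO;
        goto fail;    /* was: goto fail_drop, which calls unlock_new_inode() */
    }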
When insert_inode_locked() fails in ext3_new_inode() it most likely means the inode bitmap got corrupted and we have again allocated an inode which is already in use. Also, doing unlock_new_inode() during error recovery is wrong since the inode does not have I_NEW set. Fix the problem by jumping to fail: (instead of fail_drop:), which declares a filesystem error and does not call unlock_new_inode().
Per upstream: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=1415dd8705394399d59a3df1ab48d149e1e41e77
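The corresponding fragment for ext3_new_inode() follows the same shape; per the description above, the fail: path itself declares the filesystem error (again a sketch, not the upstream diff):

    if (insert_inode_locked(inode) < 0) {
        /* inode bitmap corruption: number already in use */
        err = -EIO;
        goto fail;    /* was: goto fail_drop */
    }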
The name_len variable in CIFSFindNext is a signed int that gets set to the resume_name_len in the cifs_search_info. The resume_name_len, however, is unsigned and for some infolevels is populated directly from a 32-bit value sent by the server.
If the server sends a very large value for this, then that value could look negative when converted to a signed int. That would make that value pass the PATH_MAX check later in CIFSFindNext. The name_len would then be used as a length value for a memcpy. It would then be treated as unsigned again, and the memcpy scribbles over a ton of memory.
Fix this by making the name_len an unsigned value in CIFSFindNext.
Per upstream: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=9438fabb73eb48055b58b89fc51e0bc4db22fabd
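The signed/unsigned trap is easier to see in isolation; the sketch below is illustrative only (copy_resume_name is a made-up helper, not an actual CIFS function):

    #include <stdint.h>
    #include <string.h>

    #ifndef PATH_MAX
    #define PATH_MAX 4096    /* Linux value, for the sketch */
    #endif

    /* BROKEN: a huge server-supplied length wraps to a negative int,
     * slips past the bounds check, then becomes huge again as the
     * size_t argument of memcpy(). */
    static void copy_resume_name(char *dst, const char *src, uint32_t server_len)
    {
        int name_len = server_len;      /* e.g. 0xffffff00 becomes negative */
        if (name_len > PATH_MAX)        /* a negative value passes the check */
            return;
        memcpy(dst, src, name_len);     /* converted back to a huge size_t */
    }

    /* FIXED, as in the commit: keep the length unsigned end to end,
     * so the oversized value is rejected by the PATH_MAX check. */
    static void copy_resume_name_fixed(char *dst, const char *src, uint32_t server_len)
    {
        unsigned int name_len = server_len;
        if (name_len > PATH_MAX)
            return;
        memcpy(dst, src, name_len);
    }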