Steven Rostedt [Wed, 25 Feb 2009 20:54:30 +0000 (15:54 -0500)]
tracing: wrap arguments with PARAMS
Peter Zijlstra warned that TPPROTO and TPARGS might become something
other than simple copies of their arguments. To prevent this from having
side effects in the TRACE_FORMAT macro in tracepoint.h, we add a
PARAMS() macro to be defined as just a wrapper.
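A minimal sketch of the idea (the TRACE_FORMAT expansion shown here is illustrative, not the exact macro body):

    /*
     * PARAMS() is a transparent wrapper: it lets a comma-separated
     * argument list survive being passed through another macro such
     * as TRACE_FORMAT() without being split up.
     */
    #define PARAMS(args...) args

    /* illustrative use: the proto/args lists are wrapped at the call site */
    #define TRACE_FORMAT(name, proto, args, fmt)                \
            DECLARE_TRACE(name, PARAMS(proto), PARAMS(args))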
Reported-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Steven Rostedt [Wed, 25 Feb 2009 20:49:52 +0000 (15:49 -0500)]
tracing: rename DEFINE_TRACE_FMT to just TRACE_FORMAT
There's been a bit of confusion as to whether DEFINE/DECLARE_TRACE_FMT should
be a DEFINE or a DECLARE. Ingo Molnar suggested simply calling it
TRACE_FORMAT.
Reported-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Tejun Heo [Thu, 26 Feb 2009 01:54:17 +0000 (10:54 +0900)]
percpu: fix too low alignment restriction on UP
The UP __alloc_percpu() triggered WARN_ON_ONCE() if the requested
alignment was larger than that of unsigned long long, which is too
small for all the cacheline-aligned allocations. Bump it up to
SMP_CACHE_BYTES, which kmalloc allocations generally guarantee.
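A sketch of what the UP stub ends up looking like (simplified from include/linux/percpu.h; the exact comment wording may differ):

    /* UP fallback: percpu memory is just regular kernel memory */
    static inline void *__alloc_percpu(size_t size, size_t align)
    {
            /*
             * kmalloc cannot guarantee more than SMP_CACHE_BYTES of
             * alignment, so warn instead of silently handing back
             * under-aligned memory.
             */
            WARN_ON_ONCE(align > SMP_CACHE_BYTES);
            return kzalloc(size, GFP_KERNEL);
    }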
Linus Torvalds [Wed, 25 Feb 2009 23:14:37 +0000 (15:14 -0800)]
Merge branch 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6
* 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6:
[IA64] Don't go beyond iosapic_intr_info's arraysize
[IA64] Do not go beyond ARRAY_SIZE of unw.hash
[IA64] enable setting DMAR on by default
Linus Torvalds [Wed, 25 Feb 2009 23:12:48 +0000 (15:12 -0800)]
Merge branch 'upstream-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jgarzik/libata-dev
* 'upstream-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jgarzik/libata-dev:
[libata] pata_legacy: for VLB 32bit PIO don't try tricks with slop
[libata] pata_amd: program FIFO
sata_mv: fix SoC interrupt breakage
pata_it821x: resume from hibernation fails with RAID volume
Roland Dreier [Wed, 25 Feb 2009 21:27:46 +0000 (13:27 -0800)]
IB: Remove sysfs files before unregistering device
Move the ib_device_unregister_sysfs() call from ib_dealloc_device() to
ib_unregister_device(). The old code allows device unregister to
proceed even if some sysfs files are open, which leaves a window where
userspace can open a file before a device is removed but then end up
reading the file after the device is removed, which leads to various
kernel crashes either because the device data structure is freed or
because the low-level driver code is gone after module removal.
By not returning from ib_unregister_device() until after all sysfs
entries are removed, we make sure that data structures and/or module
code is not freed until after all sysfs access is done.
Reported-by: Jack Morgenstein <jackm@dev.mellanox.co.il> Signed-off-by: Roland Dreier <rolandd@cisco.com>
Alan Cox [Wed, 11 Feb 2009 21:08:42 +0000 (13:08 -0800)]
[libata] pata_legacy: for VLB 32bit PIO don't try tricks with slop
These devices are generally used with ATA anyway, and it seems that some
ATAPI devices will need us to issue the right number of words. Therefore,
as we can't switch mid-burst on VLB devices, we should only use 32bit I/O
for suitable block sizes.
Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
Alan Cox [Wed, 11 Feb 2009 21:08:41 +0000 (13:08 -0800)]
[libata] pata_amd: program FIFO
With 32bit PIO we can use the posted write buffers, but only for 32bit I/O
cycles. This means we must disable the FIFO for ATAPI where a final 16bit
cycle may occur.
Rework the FIFO logic so that we disable the FIFO then selectively
re-enable it when we set the timings on AMD devices. Also fix a case
where we scribbled on PCI config 0x41 of Nvidia chips when we shouldn't.
Signed-off-by: Alan Cox <alan@lxorguk.ukuu.org.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
Mark Lord [Thu, 19 Feb 2009 15:38:04 +0000 (10:38 -0500)]
sata_mv: fix SoC interrupt breakage
For some reason, sata_mv doesn't clear interrupt status during init
when it's running on an SoC host adapter. If the bootloader has
touched the SATA controller before starting Linux, Linux can end up
enabling the SATA interrupt with events pending, which will cause the
interrupt to be marked as spurious and then be disabled, which then
breaks all further accesses to the controller.
This patch makes the SoC path clear interrupt status on init like in
the non-SoC case.
Signed-off-by: Lennert Buytenhek <buytenh@marvell.com> Signed-off-by: Mark Lord <mlord@pobox.com> Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
Ondrej Zary [Wed, 11 Feb 2009 21:08:43 +0000 (13:08 -0800)]
pata_it821x: resume from hibernation fails with RAID volume
Hibernation didn't work for me since I started to use IT8212 controller.
I did some debugging (booting with no_console_suspend init=/bin/sh).
Found that resume fails (2.6.28) with "serial number mismatch 'some
garbage' != 'some other garbage'" and "revalidation failed" messages.
That's because the controller firmware fills in a different serial number in
the IDENTIFY data on every boot.
The patch below fixes the resume by simply clearing the serial number. The
proper fix would probably be to fill in the serial number of the RAID
volume instead. I assume there must be something like that stored on
the drives, but I don't know where.
Fix resume on pata_it821x RAID volume by clearing the serial number in
IDENTIFY data, which is otherwise different on each boot.
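The shape of the workaround, as a rough sketch assuming the standard IDENTIFY layout where the serial number occupies words 10-19 (the actual hook in pata_it821x differs):

    #include <linux/ata.h>

    /* hypothetical helper: blank the firmware-generated serial number so
     * revalidation after resume does not see a spurious mismatch */
    static void it821x_blank_serial(u16 *id)
    {
            int i;

            for (i = 0; i < ATA_ID_SERNO_LEN / 2; i++)
                    id[ATA_ID_SERNO + i] = 0x2020;  /* two ASCII spaces */
    }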
Signed-off-by: Ondrej Zary <linux@rainbow-software.org> Acked-by: Alan Cox <alan@lxorguk.ukuu.org.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
Hugh Dickins [Tue, 24 Feb 2009 20:51:52 +0000 (20:51 +0000)]
shmem: fix shared anonymous accounting
Each time I exit Firefox, /proc/meminfo's Committed_AS goes down almost
400 kB: OVERCOMMIT_NEVER would be allowing overcommits it should
prohibit.
Commit fc8744adc870a8d4366908221508bb113d8b72ee "Stop playing silly
games with the VM_ACCOUNT flag" changed shmem_file_setup() to set the
shmem file's VM_ACCOUNT flag according to VM_NORESERVE not being set in
the vma flags; but did so only _after_ the shmem_acct_size(flags, size)
call which is expected to pre-account a shared anonymous object.
It's all clearer if we switch shmem.c over to use VM_NORESERVE
throughout in place of !VM_ACCOUNT.
But I very nearly sent in a patch which mistakenly removed the
accounting from tmpfs files: shmem_get_inode()'s memset was good for not
setting VM_ACCOUNT, but now it needs to set VM_NORESERVE.
Rather than setting that by default, then perhaps clearing it again in
shmem_file_setup(), let's pass it as a flag to shmem_get_inode(): that
allows us to remove the #ifdef CONFIG_SHMEM from shmem_file_setup().
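A sketch of how the pre-accounting test reads after the switch-over to VM_NORESERVE (simplified; VM_ACCT and the tmpfs size handling live in mm/shmem.c):

    /*
     * Shared anonymous objects (no VM_NORESERVE) are charged against the
     * overcommit limits up front; tmpfs files (VM_NORESERVE) are charged
     * page by page as they are instantiated.
     */
    static inline int shmem_acct_size(unsigned long flags, loff_t size)
    {
            return (flags & VM_NORESERVE) ?
                    0 : security_vm_enough_memory_kern(VM_ACCT(size));
    }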
Kyle McMartin [Sat, 14 Feb 2009 09:11:29 +0000 (04:11 -0500)]
[IA64] enable setting DMAR on by default
The previous commit which introduced the DMAR_DEFAULT_ON setting in
drivers/pci/dmar.c neglected to add the ability for ia64 to enable
the IOMMU by default. Rectify that mistake, doh!
Signed-off-by: Kyle McMartin <kyle@redhat.com> Signed-off-by: Tony Luck <tony.luck@intel.com>
During host driver module removal, del_gendisk() results in a final
put on drive->gendev and the drive being freed by drive_release_dev().
Convert device drivers from using struct kref to using struct device,
so that the device driver's object holds a reference on ->gendev and
prevents the drive from prematurely going away.
Also fix ->remove methods to not erroneously drop a reference on the
host driver, by using only put_device() instead of ide*_put().
Roel Kluin [Wed, 25 Feb 2009 19:28:22 +0000 (20:28 +0100)]
atiixp: fix missing parentheses
Fix missing parentheses so PIO/DMA timings for master device on the
second channel are programmed correctly (IOW "8 0 24 16" offset values
should be used instead of the current "8 0 16 16").
[ The bug went unnoticed because after PIO/DMA timings get programmed
incorrectly for the third device they are overwritten with timings
for the fourth device and since BIOS should also program timings for
the third device everything should work fine until suspend/resume
cycle or user requested transfer mode changes. ]
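The offset values quoted above fall directly out of C operator precedence; a small standalone demonstration (the drive-number variable is illustrative, only the expression shape matters):

    #include <stdio.h>

    int main(void)
    {
            int dn;

            for (dn = 0; dn < 4; dn++) {
                    /* buggy: parses as (dn & 2) ? 16 : ((0 + (dn & 1)) ? 0 : 8) */
                    int buggy = (dn & 2) ? 16 : 0 + (dn & 1) ? 0 : 8;
                    /* fixed: the intended sum of two independent offsets */
                    int fixed = ((dn & 2) ? 16 : 0) + ((dn & 1) ? 0 : 8);

                    printf("dn=%d buggy=%2d fixed=%2d\n", dn, buggy, fixed);
            }
            return 0;   /* buggy yields 8 0 16 16, fixed yields 8 0 24 16 */
    }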
Signed-off-by: Roel Kluin <roel.kluin@gmail.com> Cc: Sergei Shtylyov <sshtylyov@ru.mvista.com> Cc: Andrew Morton <akpm@linux-foundation.org>
[bart: update patch description] Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Documentation/kernel-parameters.txt
- ide=nodma is no longer valid.
drivers/ide/Kconfig
- The module is ide-core.ko not ide.
drivers/ide/ide.c
- It took me a while to figure out what the arguments %d.%d:%d to the nodma
module parameter meant, so I added a comment to each.
- Added a comment to each of the sscanf lines.
- There is a bug: if j is 0, it would previously clear all the other bits
except the current device's; changed in three different places.
mask &= (1 << i) should be mask &= ~(1 << i).
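A tiny standalone illustration of the masking bug (values are made up):

    #include <stdio.h>

    int main(void)
    {
            unsigned int mask = 0x0f;      /* pretend devices 0-3 have DMA enabled */
            unsigned int i = 2;            /* device we want to mark "nodma" */

            unsigned int buggy = mask & (1 << i);   /* wipes every bit except bit i */
            unsigned int fixed = mask & ~(1 << i);  /* clears only bit i */

            printf("buggy=0x%02x fixed=0x%02x\n", buggy, fixed);  /* 0x04 vs 0x0b */
            return 0;
    }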
Signed-off-by: David Fries <david@fries.net>
[bart: s/disk/device/ in ide.c, beautify patch description] Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Ingo Molnar [Sun, 22 Feb 2009 15:06:58 +0000 (16:06 +0100)]
time: ntp: clean up second_overflow()
Impact: cleanup, no functionality changed
The 'time_adj' local variable is named in a very confusing
way because it almost shadows the 'time_adjust' global
variable - which is used in this same function.
Rename it to 'delta' - to make them stand apart more clearly.
kernel/time/ntp.o:
text data bss dec hex filename
2545 114 144 2803 af3 ntp.o.before
2545 114 144 2803 af3 ntp.o.after
Ingo Molnar [Sun, 22 Feb 2009 15:03:37 +0000 (16:03 +0100)]
time: ntp: simplify ntp_tick_adj calculations
Impact: micro-optimization
Convert the (internal) ntp_tick_adj value we store from unscaled
units to scaled units. This is a constant that we never modify,
so scaling it up once during bootup is enough - we don't have to
do it for every adjustment step.
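A sketch of the idea, assuming the boot-parameter hook in kernel/time/ntp.c (details may differ):

    /* ntp_tick_adj is now stored pre-scaled */
    static s64 ntp_tick_adj;

    static int __init ntp_tick_adj_setup(char *str)
    {
            ntp_tick_adj = simple_strtol(str, NULL, 0);
            /* scale once at boot instead of on every frequency update */
            ntp_tick_adj <<= NTP_SCALE_SHIFT;
            return 1;
    }
    __setup("ntp_tick_adj=", ntp_tick_adj_setup);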
Ingo Molnar [Sun, 22 Feb 2009 12:38:40 +0000 (13:38 +0100)]
time: ntp: fix bug in ntp_update_offset() & do_adjtimex()
Impact: change (fix) the way the NTP PLL seconds offset is initialized/tracked
Fix a bug and do a micro-optimization:
When PLL is enabled we do not reset time_reftime. If the PLL
was off for a long time (for example after bootup), this is
arguably the wrong thing to do.
We already had a hack for the common boot-time case in
ntp_update_offset(), which forces the 'secs' delta to zero when
time_reftime has not been set yet.
But the update delta should be reset later on too - not just when
the PLL is enabled for the first time after bootup.
So do it on !STA_PLL -> STA_PLL transitions.
This changes behavior, as previously if ntpd was disabled for
a long time and we restarted it, we'd run from that last update,
with a very large delta.
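Roughly, the do_adjtimex() path gains a check of this shape (a sketch, not the full function):

    /*
     * Reset the reference time only on !STA_PLL -> STA_PLL transitions,
     * so the first sample after (re)enabling the PLL starts from a zero
     * seconds delta instead of a huge stale one.
     */
    if (!(time_status & STA_PLL) && (txc->status & STA_PLL))
            time_reftime = xtime.tv_sec;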
Ingo Molnar [Sun, 22 Feb 2009 12:29:09 +0000 (13:29 +0100)]
time: ntp: micro-optimize ntp_update_offset()
Impact: cleanup, no functionality changed
The time_reftime update in ntp_update_offset() to xtime.tv_sec
is a convoluted way of saying that we want to freeze the frequency
and want the 'secs' delta to be 0. Also make this branch unlikely.
This shaves off 8 bytes from the code size:
text data bss dec hex filename
2504 114 136 2754 ac2 ntp.o.before
2496 114 136 2746 aba ntp.o.after
Linus Torvalds [Wed, 25 Feb 2009 17:34:27 +0000 (09:34 -0800)]
Merge branch 'for-linus' of git://neil.brown.name/md
* 'for-linus' of git://neil.brown.name/md:
md: avoid races when stopping resync.
md/raid10: Don't call bitmap_cond_end_sync when we are doing recovery.
md/raid10: Don't skip more than 1 bitmap-chunk at a time during recovery.
Fenghua Yu [Wed, 25 Feb 2009 05:06:26 +0000 (14:06 +0900)]
Fix iwlan DMA mapping direction
When iwlan runs behind an IOMMU, the IOMMU generates a lot of PTE write
faults because the write bit is not set on some of the PTEs. This is
because the iwlan driver calls DMA mapping with PCI_DMA_TODEVICE, which
results in read-only mapping PTEs. But the iwlan device actually writes to
the mapped pages to update their contents. This issue is not exposed with
swiotlb, but VT-d hardware can capture this fault and stop the faulting
transaction.
The following patch fixes the issue.
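A minimal sketch of the direction change involved (the buffer and device variables here are placeholders, not the driver's actual call sites):

    /* before: the device may write back into the buffer, but the IOMMU
     * maps it with a read-only PTE */
    addr = pci_map_single(pdev, buf, len, PCI_DMA_TODEVICE);

    /* after: map bidirectionally so the IOMMU sets the write bit */
    addr = pci_map_single(pdev, buf, len, PCI_DMA_BIDIRECTIONAL);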
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com> Reviewed-by: Bhavesh Davda <bhavesh@vmware.com> Tested-by: Chris Wright <chrisw@sous-sol.org> Acked-by: Tomas Winkler <tomas.winkler@intel.com> Signed-off-by: David Woodhouse <David.Woodhouse@intel.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Ingo Molnar [Wed, 25 Feb 2009 13:36:45 +0000 (14:36 +0100)]
alloc_percpu: fix UP build
Impact: build fix
the !SMP branch had a 'gfp' leftover:
include/linux/percpu.h: In function '__alloc_percpu':
include/linux/percpu.h:160: error: 'gfp' undeclared (first use in this function)
include/linux/percpu.h:160: error: (Each undeclared identifier is reported only once
include/linux/percpu.h:160: error: for each function it appears in.)
Peter Zijlstra [Wed, 25 Feb 2009 12:59:47 +0000 (13:59 +0100)]
generic-ipi: remove kmalloc()
Remove the use of kmalloc() from the smp_call_function_*()
calls.
Steven's generic-ipi patch (d7240b98: generic-ipi: use per cpu
data for single cpu ipi calls) started the discussion on the use
of kmalloc() in this code and fixed the
smp_call_function_single(.wait=0) fallback case.
In this patch we complete this by also providing means for the
_many() call, which fully removes the need for kmalloc() in this
code.
The problem with the _many() call is that other cpus might still
be observing our entry when we're done with it. The old code solved this
by dynamically allocating data elements and RCU-freeing them.
We solve it by using a single per-cpu entry which provides
static storage and solves one half of the problem (avoiding
referencing freed data).
The other half, ensuring that queue iteration is still possible,
is done by placing re-used entries at the head of the list. This
means that if someone was still iterating that entry when it got
moved, he will now re-visit the entries on the list he had
already seen, but avoids skipping over entries, as would have
happened had we placed the new entry at the end.
Furthermore, visiting entries twice is not a problem, since we
remove our cpu from the entry's cpumask once it's called.
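The head-insertion trick, sketched against the per-cpu data of that time (field names approximate):

    /* re-use this cpu's static call_function_data element */
    data = &__get_cpu_var(cfd_data);

    spin_lock_irqsave(&call_function.lock, flags);
    /*
     * Place the entry at the *head* of the queue: a cpu still iterating
     * from the entry's old position re-sees entries it has already
     * handled (harmless) instead of skipping new ones.
     */
    list_add_rcu(&data->csd.list, &call_function.queue);
    spin_unlock_irqrestore(&call_function.lock, flags);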
Many thanks to Oleg for his suggestions and him poking holes in
my earlier attempts.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Nick Piggin <npiggin@suse.de> Cc: Jens Axboe <jens.axboe@oracle.com> Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> Cc: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Ingo Molnar <mingo@elte.hu>
Now that several per-cpu files can be read or spliced at the
same time, we want the read/splice callbacks for tracing files to be
reentrant.
Until now, a single global mutex (trace_types_lock) serialized
the access to tracing_read_pipe(), tracing_splice_read_pipe(),
and the seq helpers.
Ie: it means that if a user tries to read trace_pipe0 and
trace_pipe1 at the same time, the access to the function
tracing_read_pipe() is contended and one reader must wait for
the other to finish its read call.
The trace_types_lock mutex is mostly here to serialize access
to the global current tracer (current_trace), which can be
changed concurrently. Although the iter struct keeps a private
pointer to this tracer, its callbacks can be changed by another
function.
The method used here is to no longer keep a private reference to
the tracer inside the iterator, but to make a copy of it inside
the iterator. We then check on subsequent read calls whether the
tracer has changed. This is not costly because the current
tracer is not expected to change often, so we use a branch
prediction hint for that.
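The per-read check boils down to something like this (a sketch; the private copy lives in iter->trace):

    static struct tracer *old_tracer;

    /* copy the tracer to avoid using a global lock all around */
    mutex_lock(&trace_types_lock);
    if (unlikely(old_tracer != current_trace && current_trace)) {
            old_tracer = current_trace;
            *iter->trace = *current_trace;
    }
    mutex_unlock(&trace_types_lock);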
Moreover, we add a private mutex to the iterator (there is one
iterator per file descriptor) to serialize accesses in case
of multiple consumers per file descriptor (which would be a
silly idea from the user). Note that this is not to protect the
ring buffer, since the ring buffer already serializes the
readers' accesses. This is to prevent trace weirdness in
case of concurrent consumers. But these mutexes could be dropped
anyway; that would not result in any crash. Just tell me what
you think about it.
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
Currently, in the tracing debugfs directory, three files are
available to the user to let him extract the trace output:
- trace is an iterator through the ring-buffer. It's a reader
but not a consumer. It doesn't block when no more traces are
available.
- latency_trace is pretty similar to the former, except that it
adds more information such as preempt count, irq flags, ...
- trace_pipe is a reader and a consumer; it will also block
waiting for traces if necessary (heh, yes, it's a pipe).
The traces coming from different cpus are currently mixed up
inside these files. Sometimes it messes up the information,
sometimes it's useful, depending on what the tracer captures.
The tracing_cpumask file is useful to filter the output and
select only the traces captured on a custom-defined set of cpus.
But it is still not powerful enough to extract one trace buffer
per cpu at the same time.
So this patch creates a new directory: /debug/tracing/per_cpu/.
Inside this directory, you will now find one trace_pipe file and
one trace file per cpu.
Which means if you have two cpus, you will have:
trace0
trace1
trace_pipe0
trace_pipe1
And of course, reading these files will have the same effect
as with the usual tracing files, except that you will only see
the traces from the given cpu.
The original all-in-one-cpu trace files are still available in
their original place.
Until now, only one consumer was allowed on trace_pipe to avoid
racy consumption of the ring-buffer. Now the approach has changed
a bit: you can have only one consumer per cpu.
Which means you are allowed to read trace_pipe0 and trace_pipe1
concurrently, but you can't have two readers on trace_pipe0 or
trace_pipe1.
Following the same logic, if there is one reader on the common
trace_pipe, you can not have at the same time another reader on
trace_pipe0 or trace_pipe1, because trace_pipe is in essence
already a consumer of all the cpu buffers.
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
gpu/drm, x86, PAT: io_mapping_create_wc and resource_size_t
io_mapping_create_wc should take a resource_size_t parameter in place of
unsigned long. With unsigned long, there is no way to map addresses above
4GB on i386/32-bit.
On x86, addresses above 4GB cannot be mapped on i386 without PAE. Return
an error for such a case.
The patch also adds a structure for io_mapping that saves the base, size and
type on HAVE_ATOMIC_IOMAP archs, which can be used to verify the offset on
io_mapping_map calls.
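The new bookkeeping structure is roughly (a sketch for the HAVE_ATOMIC_IOMAP case):

    struct io_mapping {
            resource_size_t base;   /* may exceed 4GB even on 32-bit PAE */
            unsigned long size;
            pgprot_t prot;
    };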
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com> Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com> Cc: Dave Airlie <airlied@redhat.com> Cc: Jesse Barnes <jbarnes@virtuousgeek.org> Cc: Eric Anholt <eric@anholt.net> Cc: Keith Packard <keithp@keithp.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
Nick Piggin [Wed, 25 Feb 2009 05:22:45 +0000 (06:22 +0100)]
generic IPI: simplify barriers and locking
Simplify the barriers in generic remote function call interrupt
code.
Firstly, just unconditionally take the lock and check the list
in the generic_call_function_single_interrupt IPI handler. As
we've just taken an IPI here, the chances are fairly high that
there will be work on the list for us, so do the locking
unconditionally. This removes the tricky lockless list_empty
check and dubious barriers. The change looks bigger than it is
because it is just removing an outer loop.
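For the single-call handler, the first change boils down to roughly this (a sketch; the processing of the spliced-off list is omitted):

    void generic_smp_call_function_single_interrupt(void)
    {
            struct call_single_queue *q = &__get_cpu_var(call_single_queue);
            LIST_HEAD(list);

            /* no lockless list_empty() peek: we just got an IPI, so there
             * is almost certainly work queued -- take the lock */
            spin_lock(&q->lock);
            list_replace_init(&q->list, &list);
            spin_unlock(&q->lock);

            /* ... walk 'list' and run the queued functions ... */
    }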
Secondly, clarify architecture specific IPI locking rules.
Generic code has no tools to impose any sane ordering on IPIs if
they go outside normal cache coherency, ergo the arch code must
make them appear to obey cache coherency as a "memory operation"
to initiate an IPI, and a "memory operation" to receive one.
This way at least they can be reasoned about in generic code,
and smp_mb used to provide ordering.
The combination of these two changes means that explicit barriers
can be taken out of queue handling for the single case -- shared
data is explicitly locked, and ipi ordering must conform to
that, so no barriers are needed. An extra barrier is needed in the
many handler, so as to ensure we load the list element after the
IPI is received.
Does any architecture actually *need* these barriers? For the
initiator I could see it, but for the handler I would be
surprised. So the other thing we could do for simplicity is just
to require that, rather than just matching with cache coherency,
we just require a full barrier before generating an IPI, and
after receiving an IPI. In which case, the smp_mb()s can go
away. But just for now, we'll be on the safe side and use the
barriers (they're in the slow case anyway).
Andreas Herrmann [Wed, 25 Feb 2009 10:28:58 +0000 (11:28 +0100)]
x86: memtest: adapt log messages
- print test pattern instead of pattern number,
- show pattern as stored in memory,
- use proper priority flags,
- consistent use of u64 throughout the code
Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
Ingo Molnar [Wed, 25 Feb 2009 10:03:44 +0000 (11:03 +0100)]
tracing: remove /debug/tracing/latency_trace
Impact: remove old debug/tracing API
/debug/tracing/latency_trace is an old legacy format we kept from
the old latency tracer. Remove the file for now. If there's any
useful bit missing then we'll propagate any useful output bits into
the /debug/tracing/trace output.
Reported-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
This patch adds two new device ids to the asix driver.
One comes directly from the asix driver on their web site, the other was
reported by Armani Liao as needed for the MSI X320 to get the driver to
work properly for it.
Reported-by: Armani Liao <aliao@novell.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> Signed-off-by: David S. Miller <davem@davemloft.net>
Ron Mercer [Mon, 23 Feb 2009 10:42:17 +0000 (10:42 +0000)]
qlge: Use one path to (re)fill rx buffers.
Currently there are two paths for filling rx buffer queues. One is
used during initialization and the other during runtime. This patch
removes ql_alloc_sbq_buffers() and ql_alloc_lbq_buffers() and replaces
them with a call to the runtime functions ql_update_lbq() and
ql_update_sbq().
Signed-off-by: Ron Mercer <ron.mercer@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Ron Mercer [Mon, 23 Feb 2009 10:42:16 +0000 (10:42 +0000)]
qlge: Optimize rx buffer refill process.
RX Buffers are refilled in chunks of 16 at a time before notifying the
hardware with a register write. This can cause several writes to take
place in a given napi poll call. This change causes the write to take place
only once at the end of the call.
Signed-off-by: Ron Mercer <ron.mercer@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Instead of taking/giving the hw semaphore repeatedly when iterating over
several frame-to-queue route settings, we have the caller hold it until
all are done.
This reduces PCI bus chatter and possible waits.
Signed-off-by: Ron Mercer <ron.mercer@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Ron Mercer [Mon, 23 Feb 2009 10:42:14 +0000 (10:42 +0000)]
qlge: Increase MAC addr hw sem granularity.
Instead of taking/giving the semaphore repeatedly when iterating over
several addresses, we have the caller hold it until all are done. This
reduces PCI bus chatter and possible waits.
Signed-off-by: Ron Mercer <ron.mercer@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Ron Mercer [Mon, 23 Feb 2009 10:42:13 +0000 (10:42 +0000)]
qlge: Clean up mac address and frame route settings.
Setting MAC addresses and routing frames to various queues will need to
be done in response to firmware events as well as during initialization.
This change encapsulates the facilities into a single call that can
later be made from other places.
Signed-off-by: Ron Mercer <ron.mercer@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
This patch changes the return value of nlmsg_notify() as follows:
If NETLINK_BROADCAST_ERROR is set by any of the listeners and
an error in the delivery happened, return the broadcast error;
else if there are no listeners apart from the socket that
requested a change with the echo flag, return the result of the
unicast notification. Thus, with this patch, the unicast
notification is handled in the same way as a broadcast listener
that has set the NETLINK_BROADCAST_ERROR socket flag.
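Sketched out, the return-value handling in nlmsg_notify() ends up along these lines (simplified; refcounting of the skb is omitted):

    int err = 0;

    if (group) {
            /* broadcast: only reports an error if some listener set the
             * NETLINK_BROADCAST_ERROR socket option */
            err = nlmsg_multicast(sk, skb, exclude_pid, group, flags);
    }

    if (report) {
            /* echo unicast back to the requesting socket */
            int err2 = nlmsg_unicast(sk, skb, pid);

            /* if the broadcast did not report an error, return the
             * result of the unicast notification instead */
            if (!err)
                    err = err2;
    }

    return err;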
This patch is useful in case the caller of nlmsg_notify()
wants to know the result of the delivery of a netlink notification
(including the broadcast delivery) and take action in case
the delivery failed. For example, ctnetlink can drop packets
if the event delivery failed, to provide reliable logging and
state-synchronization at the cost of dropping packets.
This patch also modifies the rtnetlink code to ignore the return
value of rtnl_notify() in all callers. The function rtnl_notify()
(before this patch) returned the error of the unicast notification,
which made rtnl_set_sk_err() report errors to all listeners. This
is not of any help since the origin of the change (the socket that
requested the echoing) notices the ENOBUFS error if the notification
fails and should resync itself.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Acked-by: Patrick McHardy <kaber@trash.net> Signed-off-by: David S. Miller <davem@davemloft.net>
mv643xx_eth: set sane default receive coalescing timeout
A receive coalescing timeout of 250 usec appears to strike a good
balance between allowing enough received frames to be aggregated for
LRO to do its job and not allowing the connection to stall due to
delaying ACKs to the remote end for too long.
Signed-off-by: Lennert Buytenhek <buytenh@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
mv643xx_eth: move a couple of init actions from ->open() to port probe
Move the netif_carrier_off() call in ->open() to port probe, so that
ethtool doesn't report the link as being up before we have up'd the
interface.
Move initialisation of the rx/tx coalescing timers from ->open() to
port probe, so that we don't reset the coalescing timers every time
the interface is up'd.
Signed-off-by: Lennert Buytenhek <buytenh@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David Rientjes [Wed, 25 Feb 2009 07:16:35 +0000 (09:16 +0200)]
slub: rename calculate_min_partial() to set_min_partial()
As suggested by Christoph Lameter, rename calculate_min_partial() to
set_min_partial() as the function doesn't really do any calculations.
Cc: Christoph Lameter <cl@linux-foundation.org> Signed-off-by: David Rientjes <rientjes@google.com> Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
[ARM] orion5x: ts78xx make more bulletproof the RTC load/unload code
Added checks to the platform_device_(register|add) calls so that if
a device failed to load it would then not later be unloaded; also
added the hooks so that it would not try to unload when the RTC
driver support is compiled out.
Signed-off-by: Alexander Clouter <alex@digriz.org.uk> Signed-off-by: Nicolas Pitre <nico@cam.org>
Dave Airlie [Wed, 25 Feb 2009 04:49:21 +0000 (14:49 +1000)]
drm/i915: make hw page ioremap use ioremap_wc
However, we still have another issue with ioremap_wc not falling back
properly or somehow doing something else stupid; this probably needs
to be tracked down.