[PATCH 0/6] Add dynamic ftrace support for RISC-V platforms

Alan Kao

Jan 10, 2018, 2:38:45 AM
to Palmer Dabbelt, Albert Ou, Steven Rostedt, Ingo Molnar, Masahiro Yamada, Kamil Rytarowski, Andrew Morton, pat...@groups.riscv.org, linux-...@vger.kernel.org, Alan Kao, Greentime Hu
This patch set includes the building blocks of dynamic ftrace features
for RISC-V machines.

Alan Kao (6):
riscv/ftrace: Add RECORD_MCOUNT support
riscv/ftrace: Add dynamic function tracer support
riscv/ftrace: Add dynamic function graph tracer support
riscv/ftrace: Add ARCH_SUPPORTS_FTRACE_OPS support
riscv/ftrace: Add DYNAMIC_FTRACE_WITH_REGS support
riscv/ftrace: Add HAVE_FUNCTION_GRAPH_RET_ADDR_PTR support

arch/riscv/Kconfig | 3 +
arch/riscv/Makefile | 6 +-
arch/riscv/include/asm/ftrace.h | 47 ++++++++
arch/riscv/kernel/Makefile | 5 +-
arch/riscv/kernel/ftrace.c | 136 +++++++++++++++++++++-
arch/riscv/kernel/mcount-dyn.S | 244 ++++++++++++++++++++++++++++++++++++++++
arch/riscv/kernel/mcount.S | 22 ++--
arch/riscv/kernel/stacktrace.c | 6 +
scripts/recordmcount.pl | 5 +
9 files changed, 460 insertions(+), 14 deletions(-)
create mode 100644 arch/riscv/kernel/mcount-dyn.S

--
2.15.1

Alan Kao

Jan 10, 2018, 2:38:48 AM
to Palmer Dabbelt, Albert Ou, Steven Rostedt, Ingo Molnar, Masahiro Yamada, Kamil Rytarowski, Andrew Morton, pat...@groups.riscv.org, linux-...@vger.kernel.org, Alan Kao, Greentime Hu
Now recordmcount.pl recognizes RISC-V object files. For the mechanism to
work, we have to disable linker relaxation. This is because relaxation
happens after the script records the offsets of _mcount call sites,
resulting in an unreliable record.

Cc: Greentime Hu <gree...@andestech.com>
Signed-off-by: Alan Kao <ala...@andestech.com>
---
arch/riscv/Kconfig | 1 +
arch/riscv/Makefile | 6 +++++-
scripts/recordmcount.pl | 5 +++++
3 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 504ba386b22e..346dd1b0fb05 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -112,6 +112,7 @@ config ARCH_RV64I
select 64BIT
select HAVE_FUNCTION_TRACER
select HAVE_FUNCTION_GRAPH_TRACER
+ select HAVE_FTRACE_MCOUNT_RECORD

endchoice

diff --git a/arch/riscv/Makefile b/arch/riscv/Makefile
index 6719dd30ec5b..2bc39c6d9662 100644
--- a/arch/riscv/Makefile
+++ b/arch/riscv/Makefile
@@ -10,7 +10,11 @@

LDFLAGS :=
OBJCOPYFLAGS := -O binary
-LDFLAGS_vmlinux :=
+ifeq ($(CONFIG_DYNAMIC_FTRACE),y)
+ LDFLAGS_vmlinux := --no-relax
+else
+ LDFLAGS_vmlinux :=
+endif
KBUILD_AFLAGS_MODULE += -fPIC
KBUILD_CFLAGS_MODULE += -fPIC

diff --git a/scripts/recordmcount.pl b/scripts/recordmcount.pl
index 2033af758173..d44d55db7c06 100755
--- a/scripts/recordmcount.pl
+++ b/scripts/recordmcount.pl
@@ -376,6 +376,11 @@ if ($arch eq "x86_64") {
$mcount_regex = "^\\s*([0-9a-fA-F]+):.*\\s__mcount\$";
$type = ".quad";
$alignment = 8;
+} elsif ($arch eq "riscv") {
+ $function_regex = "^([0-9a-fA-F]+)\\s+<([^.0-9][0-9a-zA-Z_\\.]+)>:";
+ $mcount_regex = "^\\s*([0-9a-fA-F]+):\\sR_RISCV_CALL\\s_mcount\$";
+ $type = ".quad";
+ $alignment = 2;
} else {
die "Arch $arch is not supported with CONFIG_FTRACE_MCOUNT_RECORD";
}
--
2.15.1

Alan Kao

Jan 10, 2018, 2:38:51 AM
to Palmer Dabbelt, Albert Ou, Steven Rostedt, Ingo Molnar, Masahiro Yamada, Kamil Rytarowski, Andrew Morton, pat...@groups.riscv.org, linux-...@vger.kernel.org, Alan Kao, Greentime Hu
We now have dynamic ftrace with the following added items:

* ftrace_make_call, ftrace_make_nop (in kernel/ftrace.c)
The two functions turn any recorded call site of a filtered function
into a call to ftrace_caller or into nops.

* ftrace_update_ftrace_func (in kernel/ftrace.c)
Turns the nops at ftrace_call into a call to a generic entry for
function tracers.

* ftrace_caller (in kernel/mcount-dyn.S)
The entry that each _mcount call site jumps to once the function
is filtered for tracing.

Also, this patch fixes the semantic problems in mcount.S, which will
serve only as a reference implementation once dynamic ftrace is in
place.

Cc: Greentime Hu <gree...@andestech.com>
Signed-off-by: Alan Kao <ala...@andestech.com>
---
arch/riscv/Kconfig | 1 +
arch/riscv/include/asm/ftrace.h | 45 ++++++++++++++++++++
arch/riscv/kernel/Makefile | 5 ++-
arch/riscv/kernel/ftrace.c | 94 ++++++++++++++++++++++++++++++++++++++++-
arch/riscv/kernel/mcount-dyn.S | 52 +++++++++++++++++++++++
arch/riscv/kernel/mcount.S | 22 ++++++----
6 files changed, 207 insertions(+), 12 deletions(-)
create mode 100644 arch/riscv/kernel/mcount-dyn.S

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 346dd1b0fb05..96db66272db5 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -113,6 +113,7 @@ config ARCH_RV64I
select HAVE_FUNCTION_TRACER
select HAVE_FUNCTION_GRAPH_TRACER
select HAVE_FTRACE_MCOUNT_RECORD
+ select HAVE_DYNAMIC_FTRACE

endchoice

diff --git a/arch/riscv/include/asm/ftrace.h b/arch/riscv/include/asm/ftrace.h
index 66d4175eb13e..acf0c7d001f3 100644
--- a/arch/riscv/include/asm/ftrace.h
+++ b/arch/riscv/include/asm/ftrace.h
@@ -8,3 +8,48 @@
#if defined(CONFIG_FUNCTION_GRAPH_TRACER) && defined(CONFIG_FRAME_POINTER)
#define HAVE_FUNCTION_GRAPH_FP_TEST
#endif
+
+#ifndef __ASSEMBLY__
+void _mcount(void);
+static inline unsigned long ftrace_call_adjust(unsigned long addr)
+{
+ return addr;
+}
+
+struct dyn_arch_ftrace {
+};
+#endif
+
+#ifdef CONFIG_DYNAMIC_FTRACE
+/*
+ * A general call in RISC-V is a pair of insts:
+ * 1) auipc: setting high-20 pc-related bits to ra register
+ * 2) jalr: setting low-12 offset to ra, jump to ra, and set ra to
+ * return address (original pc + 4)
+ *
+ * Dynamic ftrace generates probes to call sites, so we must deal with
+ * both auipc and jalr at the same time.
+ */
+
+#define MCOUNT_ADDR ((unsigned long)_mcount)
+#define JALR_SIGN_MASK (0x00000800)
+#define JALR_OFFSET_MASK (0x00000fff)
+#define AUIPC_OFFSET_MASK (0xfffff000)
+#define AUIPC_PAD (0x00001000)
+#define JALR_SHIFT 20
+#define JALR_BASIC (0x000080e7)
+#define AUIPC_BASIC (0x00000097)
+#define NOP4 (0x00000013)
+
+#define to_jalr_insn(_offset) \
+ (((_offset & JALR_OFFSET_MASK) << JALR_SHIFT) | JALR_BASIC)
+
+#define to_auipc_insn(_offset) ((_offset & JALR_SIGN_MASK) ? \
+ (((_offset & AUIPC_OFFSET_MASK) + AUIPC_PAD) | AUIPC_BASIC) : \
+ ((_offset & AUIPC_OFFSET_MASK) | AUIPC_BASIC))
+
+/*
+ * Let auipc+jalr be the basic *mcount unit*, so we make it 8 bytes here.
+ */
+#define MCOUNT_INSN_SIZE 8
+#endif
diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
index 196f62ffc428..d7bdf888f1ca 100644
--- a/arch/riscv/kernel/Makefile
+++ b/arch/riscv/kernel/Makefile
@@ -34,7 +34,8 @@ CFLAGS_setup.o := -mcmodel=medany
obj-$(CONFIG_SMP) += smpboot.o
obj-$(CONFIG_SMP) += smp.o
obj-$(CONFIG_MODULES) += module.o
-obj-$(CONFIG_FUNCTION_TRACER) += mcount.o
-obj-$(CONFIG_FUNCTION_GRAPH_TRACER) += ftrace.o
+
+obj-$(CONFIG_FUNCTION_TRACER) += mcount.o ftrace.o
+obj-$(CONFIG_DYNAMIC_FTRACE) += mcount-dyn.o

clean:
diff --git a/arch/riscv/kernel/ftrace.c b/arch/riscv/kernel/ftrace.c
index d0de68d144cb..49d2d799f532 100644
--- a/arch/riscv/kernel/ftrace.c
+++ b/arch/riscv/kernel/ftrace.c
@@ -6,9 +6,101 @@
*/

#include <linux/ftrace.h>
+#include <linux/uaccess.h>
+#include <asm/cacheflush.h>
+
+#ifdef CONFIG_DYNAMIC_FTRACE
+static int ftrace_check_current_call(unsigned long hook_pos,
+ unsigned int *expected)
+{
+ unsigned int replaced[2];
+ unsigned int nops[2] = {NOP4, NOP4};
+
+ /* we expect nops at the hook position */
+ if (!expected)
+ expected = nops;
+
+ /* read the text we want to modify */
+ if (probe_kernel_read(replaced, (void *)hook_pos, MCOUNT_INSN_SIZE))
+ return -EFAULT;
+
+ /* Make sure it is what we expect it to be */
+ if (replaced[0] != expected[0] || replaced[1] != expected[1]) {
+ pr_err("%p: expected (%08x %08x) but get (%08x %08x)",
+ (void *)hook_pos, expected[0], expected[1], replaced[0],
+ replaced[1]);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int __ftrace_modify_call(unsigned long hook_pos, unsigned long target,
+ bool enable)
+{
+ unsigned int offset = (unsigned int)(target - hook_pos);
+ unsigned int auipc_call = to_auipc_insn(offset);
+ unsigned int jalr_call = to_jalr_insn(offset);
+ unsigned int calls[2] = {auipc_call, jalr_call};
+ unsigned int nops[2] = {NOP4, NOP4};
+ int ret = 0;
+
+ /* when ftrace_make_nop is called */
+ if (!enable)
+ ret = ftrace_check_current_call(hook_pos, calls);
+
+ if (ret)
+ goto out;
+
+ /* replace the auipc-jalr pair at once */
+ ret = probe_kernel_write((void *)hook_pos, enable ? calls : nops,
+ MCOUNT_INSN_SIZE);
+ if (ret)
+ goto out;
+
+ smp_mb();
+ flush_icache_range((void *)hook_pos, (void *)hook_pos + MCOUNT_INSN_SIZE);
+
+out:
+ return ret;
+}
+
+int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
+{
+ int ret = ftrace_check_current_call(rec->ip, NULL);
+
+ if (ret)
+ return ret;
+
+ return __ftrace_modify_call(rec->ip, addr, true);
+}
+
+int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec,
+ unsigned long addr)
+{
+ return __ftrace_modify_call(rec->ip, addr, false);
+}
+
+int ftrace_update_ftrace_func(ftrace_func_t func)
+{
+ int ret = __ftrace_modify_call((unsigned long)&ftrace_call,
+ (unsigned long)func, true);
+ if (!ret) {
+ ret = __ftrace_modify_call((unsigned long)&ftrace_regs_call,
+ (unsigned long)func, true);
+ }
+
+ return ret;
+}
+
+int __init ftrace_dyn_arch_init(void)
+{
+ return 0;
+}
+#endif

/*
- * Most of this file is copied from arm64.
+ * Most of this function is copied from arm64.
*/
void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr,
unsigned long frame_pointer)
diff --git a/arch/riscv/kernel/mcount-dyn.S b/arch/riscv/kernel/mcount-dyn.S
new file mode 100644
index 000000000000..57f80fe09cbd
--- /dev/null
+++ b/arch/riscv/kernel/mcount-dyn.S
@@ -0,0 +1,52 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2017 Andes Technology Corporation */
+
+#include <linux/init.h>
+#include <linux/linkage.h>
+#include <asm/asm.h>
+#include <asm/csr.h>
+#include <asm/unistd.h>
+#include <asm/thread_info.h>
+#include <asm/asm-offsets.h>
+#include <asm-generic/export.h>
+#include <asm/ftrace.h>
+
+ .text
+
+ .macro SAVE_ABI_STATE
+ addi sp, sp, -16
+ sd s0, 0(sp)
+ sd ra, 8(sp)
+ addi s0, sp, 16
+ .endm
+
+ .macro RESTORE_ABI_STATE
+ ld ra, 8(sp)
+ ld s0, 0(sp)
+ addi sp, sp, 16
+ .endm
+
+ENTRY(ftrace_caller)
+ /*
+ * a0: the address in the caller when calling ftrace_caller
+ * a1: the caller's return address
+ */
+ ld a1, -8(s0)
+ addi a0, ra, -MCOUNT_INSN_SIZE
+ SAVE_ABI_STATE
+ftrace_call:
+ .global ftrace_call
+ /*
+ * For the dynamic ftrace to work, here we should reserve at least
+ * 8 bytes for a functional auipc-jalr pair. Pseudo inst nop may be
+ * interpreted as different length in different models, so we manually
+ * *expand* two 4-byte nops here.
+ *
+ * Calling ftrace_update_ftrace_func would overwrite the nops below.
+ * Check ftrace_modify_all_code for details.
+ */
+ addi x0, x0, 0
+ addi x0, x0, 0
+ RESTORE_ABI_STATE
+ ret
+ENDPROC(ftrace_caller)
diff --git a/arch/riscv/kernel/mcount.S b/arch/riscv/kernel/mcount.S
index c46a778627be..ce9bdc57a2a1 100644
--- a/arch/riscv/kernel/mcount.S
+++ b/arch/riscv/kernel/mcount.S
@@ -32,13 +32,13 @@
addi s0, sp, 32
.endm

- .macro STORE_ABI_STATE
+ .macro RESTORE_ABI_STATE
ld ra, 8(sp)
ld s0, 0(sp)
addi sp, sp, 16
.endm

- .macro STORE_RET_ABI_STATE
+ .macro RESTORE_RET_ABI_STATE
ld ra, 24(sp)
ld s0, 16(sp)
ld a0, 8(sp)
@@ -46,6 +46,10 @@
.endm

ENTRY(ftrace_stub)
+#ifdef CONFIG_DYNAMIC_FTRACE
+ .global _mcount
+ .set _mcount, ftrace_stub
+#endif
ret
ENDPROC(ftrace_stub)

@@ -66,15 +70,15 @@ ENTRY(return_to_handler)
#ifdef HAVE_FUNCTION_GRAPH_FP_TEST
mv a0, t6
#endif
- la t0, ftrace_return_to_handler
- jalr t0
+ call ftrace_return_to_handler
mv a1, a0
- STORE_RET_ABI_STATE
+ RESTORE_RET_ABI_STATE
jalr a1
ENDPROC(return_to_handler)
EXPORT_SYMBOL(return_to_handler)
#endif

+#ifndef CONFIG_DYNAMIC_FTRACE
ENTRY(_mcount)
la t4, ftrace_stub
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
@@ -104,9 +108,8 @@ do_ftrace_graph_caller:
ld a2, -16(s0)
#endif
SAVE_ABI_STATE
- la t0, prepare_ftrace_return
- jalr t0
- STORE_ABI_STATE
+ call prepare_ftrace_return
+ RESTORE_ABI_STATE
ret
#endif

@@ -120,7 +123,8 @@ do_trace:

SAVE_ABI_STATE
jalr t5
- STORE_ABI_STATE
+ RESTORE_ABI_STATE
ret
ENDPROC(_mcount)
EXPORT_SYMBOL(_mcount)
+#endif
--
2.15.1

Alan Kao

Jan 10, 2018, 2:38:53 AM
to Palmer Dabbelt, Albert Ou, Steven Rostedt, Ingo Molnar, Masahiro Yamada, Kamil Rytarowski, Andrew Morton, pat...@groups.riscv.org, linux-...@vger.kernel.org, Alan Kao, Greentime Hu
Once the function_graph tracer is enabled, a filtered function has the
following call sequence:

* ftrace_caller ==> on/off by ftrace_make_call/ftrace_make_nop
* ftrace_graph_caller
* ftrace_graph_call ==> on/off by ftrace_en/disable_ftrace_graph_caller
* prepare_ftrace_return

Considering that the DYNAMIC_FTRACE_WITH_REGS feature, which introduces
another hook entry ftrace_regs_caller, will also need access to
ftrace_graph_call, it is more extensible to have a separate
ftrace_graph_caller function than to call prepare_ftrace_return directly
from ftrace_caller.

Cc: Greentime Hu <gree...@andestech.com>
Signed-off-by: Alan Kao <ala...@andestech.com>
---
arch/riscv/kernel/ftrace.c | 25 +++++++++++++++-
arch/riscv/kernel/mcount-dyn.S | 65 ++++++++++++++++++++++++++++++++++++++++++
2 files changed, 89 insertions(+), 1 deletion(-)

diff --git a/arch/riscv/kernel/ftrace.c b/arch/riscv/kernel/ftrace.c
index 49d2d799f532..239ef5d56f24 100644
--- a/arch/riscv/kernel/ftrace.c
+++ b/arch/riscv/kernel/ftrace.c
@@ -45,7 +45,7 @@ static int __ftrace_modify_call(unsigned long hook_pos, unsigned long target,
unsigned int nops[2] = {NOP4, NOP4};
int ret = 0;

- /* when ftrace_make_nop is called */
+ /* for ftrace_make_nop and ftrace_disable_ftrace_graph_caller */
if (!enable)
ret = ftrace_check_current_call(hook_pos, calls);

@@ -99,6 +99,7 @@ int __init ftrace_dyn_arch_init(void)
}
#endif

+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
/*
* Most of this function is copied from arm64.
*/
@@ -131,3 +132,25 @@ void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr,
return;
*parent = return_hooker;
}
+
+#ifdef CONFIG_DYNAMIC_FTRACE
+extern void ftrace_graph_call(void);
+int ftrace_enable_ftrace_graph_caller(void)
+{
+ int ret = ftrace_check_current_call((unsigned long)&ftrace_graph_call,
+ NULL);
+
+ if (ret)
+ return ret;
+
+ return __ftrace_modify_call((unsigned long)&ftrace_graph_call,
+ (unsigned long)&prepare_ftrace_return, true);
+}
+
+int ftrace_disable_ftrace_graph_caller(void)
+{
+ return __ftrace_modify_call((unsigned long)&ftrace_graph_call,
+ (unsigned long)&prepare_ftrace_return, false);
+}
+#endif /* CONFIG_DYNAMIC_FTRACE */
+#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
diff --git a/arch/riscv/kernel/mcount-dyn.S b/arch/riscv/kernel/mcount-dyn.S
index 57f80fe09cbd..64e715d4e180 100644
--- a/arch/riscv/kernel/mcount-dyn.S
+++ b/arch/riscv/kernel/mcount-dyn.S
@@ -14,18 +14,63 @@
.text

.macro SAVE_ABI_STATE
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+ addi sp, sp, -48
+ sd s0, 32(sp)
+ sd ra, 40(sp)
+ addi s0, sp, 48
+ sd t0, 24(sp)
+ sd t1, 16(sp)
+#ifdef HAVE_FUNCTION_GRAPH_FP_TEST
+ sd t2, 8(sp)
+#endif
+#else
addi sp, sp, -16
sd s0, 0(sp)
sd ra, 8(sp)
addi s0, sp, 16
+#endif
.endm

.macro RESTORE_ABI_STATE
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+ ld s0, 32(sp)
+ ld ra, 40(sp)
+ addi sp, sp, 48
+#else
ld ra, 8(sp)
ld s0, 0(sp)
addi sp, sp, 16
+#endif
.endm

+ .macro RESTORE_GRAPH_ARGS
+ ld a0, 24(sp)
+ ld a1, 16(sp)
+#ifdef HAVE_FUNCTION_GRAPH_FP_TEST
+ ld a2, 8(sp)
+#endif
+ .endm
+
+ENTRY(ftrace_graph_caller)
+ addi sp, sp, -16
+ sd s0, 0(sp)
+ sd ra, 8(sp)
+ addi s0, sp, 16
+ftrace_graph_call:
+ .global ftrace_graph_call
+ /*
+ * Calling ftrace_enable/disable_ftrace_graph_caller would overwrite the
+ * nops below. Check ftrace_modify_all_code for details.
+ */
+ addi x0, x0, 0
+ addi x0, x0, 0
+ ld ra, 8(sp)
+ ld s0, 0(sp)
+ addi sp, sp, 16
+ ret
+ENDPROC(ftrace_graph_caller)
+
ENTRY(ftrace_caller)
/*
* a0: the address in the caller when calling ftrace_caller
@@ -33,6 +78,20 @@ ENTRY(ftrace_caller)
*/
ld a1, -8(s0)
addi a0, ra, -MCOUNT_INSN_SIZE
+
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+ /*
+ * the graph tracer (specifically, prepare_ftrace_return) needs these
+ * arguments but for now the function tracer occupies the regs, so we
+ * save them in temporary regs to recover later.
+ */
+ addi t0, s0, -8
+ mv t1, a0
+#ifdef HAVE_FUNCTION_GRAPH_FP_TEST
+ ld t2, -16(s0)
+#endif
+#endif
+
SAVE_ABI_STATE
ftrace_call:
.global ftrace_call
@@ -47,6 +106,12 @@ ftrace_call:
*/
addi x0, x0, 0
addi x0, x0, 0
+
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+ RESTORE_GRAPH_ARGS
+ call ftrace_graph_caller
+#endif
+
RESTORE_ABI_STATE
ret
ENDPROC(ftrace_caller)
--
2.15.1

Alan Kao

Jan 10, 2018, 2:38:56 AM
to Palmer Dabbelt, Albert Ou, Steven Rostedt, Ingo Molnar, Masahiro Yamada, Kamil Rytarowski, Andrew Morton, pat...@groups.riscv.org, linux-...@vger.kernel.org, Alan Kao, Greentime Hu
Cc: Greentime Hu <gree...@andestech.com>
Signed-off-by: Alan Kao <ala...@andestech.com>
---
arch/riscv/include/asm/ftrace.h | 1 +
arch/riscv/kernel/mcount-dyn.S | 3 +++
2 files changed, 4 insertions(+)

diff --git a/arch/riscv/include/asm/ftrace.h b/arch/riscv/include/asm/ftrace.h
index acf0c7d001f3..429a6a156645 100644
--- a/arch/riscv/include/asm/ftrace.h
+++ b/arch/riscv/include/asm/ftrace.h
@@ -9,6 +9,7 @@
#define HAVE_FUNCTION_GRAPH_FP_TEST
#endif

+#define ARCH_SUPPORTS_FTRACE_OPS 1
#ifndef __ASSEMBLY__
void _mcount(void);
static inline unsigned long ftrace_call_adjust(unsigned long addr)
diff --git a/arch/riscv/kernel/mcount-dyn.S b/arch/riscv/kernel/mcount-dyn.S
index 64e715d4e180..627478571c7a 100644
--- a/arch/riscv/kernel/mcount-dyn.S
+++ b/arch/riscv/kernel/mcount-dyn.S
@@ -75,9 +75,12 @@ ENTRY(ftrace_caller)
/*
* a0: the address in the caller when calling ftrace_caller
* a1: the caller's return address
+ * a2: the address of global variable function_trace_op
*/
ld a1, -8(s0)
addi a0, ra, -MCOUNT_INSN_SIZE
+ la t5, function_trace_op
+ ld a2, 0(t5)

#ifdef CONFIG_FUNCTION_GRAPH_TRACER
/*
--
2.15.1

Alan Kao

Jan 10, 2018, 2:38:59 AM
to Palmer Dabbelt, Albert Ou, Steven Rostedt, Ingo Molnar, Masahiro Yamada, Kamil Rytarowski, Andrew Morton, pat...@groups.riscv.org, linux-...@vger.kernel.org, Alan Kao, Greentime Hu
Cc: Greentime Hu <gree...@andestech.com>
Signed-off-by: Alan Kao <ala...@andestech.com>
---
arch/riscv/Kconfig | 1 +
arch/riscv/kernel/ftrace.c | 17 ++++++
arch/riscv/kernel/mcount-dyn.S | 124 +++++++++++++++++++++++++++++++++++++++++
3 files changed, 142 insertions(+)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 96db66272db5..06685bcf5643 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -114,6 +114,7 @@ config ARCH_RV64I
select HAVE_FUNCTION_GRAPH_TRACER
select HAVE_FTRACE_MCOUNT_RECORD
select HAVE_DYNAMIC_FTRACE
+ select HAVE_DYNAMIC_FTRACE_WITH_REGS

endchoice

diff --git a/arch/riscv/kernel/ftrace.c b/arch/riscv/kernel/ftrace.c
index 239ef5d56f24..c9cc884961d7 100644
--- a/arch/riscv/kernel/ftrace.c
+++ b/arch/riscv/kernel/ftrace.c
@@ -99,6 +99,23 @@ int __init ftrace_dyn_arch_init(void)
}
#endif

+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
+int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
+ unsigned long addr)
+{
+ unsigned int offset = (unsigned int)(old_addr - rec->ip);
+ unsigned int auipc_call = to_auipc_insn(offset);
+ unsigned int jalr_call = to_jalr_insn(offset);
+ unsigned int calls[2] = {auipc_call, jalr_call};
+ int ret = ftrace_check_current_call(rec->ip, calls);
+
+ if (ret)
+ return ret;
+
+ return __ftrace_modify_call(rec->ip, addr, true);
+}
+#endif
+
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
/*
* Most of this function is copied from arm64.
diff --git a/arch/riscv/kernel/mcount-dyn.S b/arch/riscv/kernel/mcount-dyn.S
index 627478571c7a..3ec3ddbfb5e7 100644
--- a/arch/riscv/kernel/mcount-dyn.S
+++ b/arch/riscv/kernel/mcount-dyn.S
@@ -118,3 +118,127 @@ ftrace_call:
RESTORE_ABI_STATE
ret
ENDPROC(ftrace_caller)
+
+
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
+ .macro SAVE_ALL
+ addi sp, sp, -(PT_SIZE_ON_STACK+16)
+ sd s0, (PT_SIZE_ON_STACK)(sp)
+ sd ra, (PT_SIZE_ON_STACK+8)(sp)
+ addi s0, sp, (PT_SIZE_ON_STACK+16)
+
+ sd x1, PT_RA(sp)
+ sd x2, PT_SP(sp)
+ sd x3, PT_GP(sp)
+ sd x4, PT_TP(sp)
+ sd x5, PT_T0(sp)
+ sd x6, PT_T1(sp)
+ sd x7, PT_T2(sp)
+ sd x8, PT_S0(sp)
+ sd x9, PT_S1(sp)
+ sd x10, PT_A0(sp)
+ sd x11, PT_A1(sp)
+ sd x12, PT_A2(sp)
+ sd x13, PT_A3(sp)
+ sd x14, PT_A4(sp)
+ sd x15, PT_A5(sp)
+ sd x16, PT_A6(sp)
+ sd x17, PT_A7(sp)
+ sd x18, PT_S2(sp)
+ sd x19, PT_S3(sp)
+ sd x20, PT_S4(sp)
+ sd x21, PT_S5(sp)
+ sd x22, PT_S6(sp)
+ sd x23, PT_S7(sp)
+ sd x24, PT_S8(sp)
+ sd x25, PT_S9(sp)
+ sd x26, PT_S10(sp)
+ sd x27, PT_S11(sp)
+ sd x28, PT_T3(sp)
+ sd x29, PT_T4(sp)
+ sd x30, PT_T5(sp)
+ sd x31, PT_T6(sp)
+ .endm
+
+ .macro RESTORE_ALL
+ ld x1, PT_RA(sp)
+ ld x2, PT_SP(sp)
+ ld x3, PT_GP(sp)
+ ld x4, PT_TP(sp)
+ ld x5, PT_T0(sp)
+ ld x6, PT_T1(sp)
+ ld x7, PT_T2(sp)
+ ld x8, PT_S0(sp)
+ ld x9, PT_S1(sp)
+ ld x10, PT_A0(sp)
+ ld x11, PT_A1(sp)
+ ld x12, PT_A2(sp)
+ ld x13, PT_A3(sp)
+ ld x14, PT_A4(sp)
+ ld x15, PT_A5(sp)
+ ld x16, PT_A6(sp)
+ ld x17, PT_A7(sp)
+ ld x18, PT_S2(sp)
+ ld x19, PT_S3(sp)
+ ld x20, PT_S4(sp)
+ ld x21, PT_S5(sp)
+ ld x22, PT_S6(sp)
+ ld x23, PT_S7(sp)
+ ld x24, PT_S8(sp)
+ ld x25, PT_S9(sp)
+ ld x26, PT_S10(sp)
+ ld x27, PT_S11(sp)
+ ld x28, PT_T3(sp)
+ ld x29, PT_T4(sp)
+ ld x30, PT_T5(sp)
+ ld x31, PT_T6(sp)
+
+ ld s0, (PT_SIZE_ON_STACK)(sp)
+ ld ra, (PT_SIZE_ON_STACK+8)(sp)
+ addi sp, sp, (PT_SIZE_ON_STACK+16)
+ .endm
+
+ .macro RESTORE_GRAPH_REG_ARGS
+ ld a0, PT_T0(sp)
+ ld a1, PT_T1(sp)
+#ifdef HAVE_FUNCTION_GRAPH_FP_TEST
+ ld a2, PT_T2(sp)
+#endif
+ .endm
+
+/*
+ * Most of the contents are the same as ftrace_caller.
+ */
+ENTRY(ftrace_regs_caller)
+ /*
+ * a3: the address of all registers in the stack
+ */
+ ld a1, -8(s0)
+ addi a0, ra, -MCOUNT_INSN_SIZE
+ la t5, function_trace_op
+ ld a2, 0(t5)
+ addi a3, sp, -(PT_SIZE_ON_STACK+16)
+
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+ addi t0, s0, -8
+ mv t1, a0
+#ifdef HAVE_FUNCTION_GRAPH_FP_TEST
+ ld t2, -16(s0)
+#endif
+#endif
+ SAVE_ALL
+
+ftrace_regs_call:
+ .global ftrace_regs_call
+ addi x0, x0, 0
+ addi x0, x0, 0
+
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+ RESTORE_GRAPH_REG_ARGS
+ call ftrace_graph_caller
+#endif
+
+ RESTORE_ALL
+ ret
+ENDPROC(ftrace_regs_caller)
+#endif /* CONFIG_DYNAMIC_FTRACE_WITH_REGS */
--
2.15.1

Alan Kao

Jan 10, 2018, 2:39:02 AM
to Palmer Dabbelt, Albert Ou, Steven Rostedt, Ingo Molnar, Masahiro Yamada, Kamil Rytarowski, Andrew Morton, pat...@groups.riscv.org, linux-...@vger.kernel.org, Alan Kao, Greentime Hu
When doing unwinding in the function walk_stackframe, the pc now receives
its value from a call to ftrace_graph_ret_addr instead of from a manual
calculation.

Note that the original expression,
pc = frame->ra - 4
is buggy if the instruction at the return address happens to be a
compressed instruction. But since this is not a critical part of ftrace
and is RISC-V-specific behavior, it is left alone for now to ease the
review process.

Cc: Greentime Hu <gree...@andestech.com>
Signed-off-by: Alan Kao <ala...@andestech.com>
---
arch/riscv/include/asm/ftrace.h | 1 +
arch/riscv/kernel/ftrace.c | 2 +-
arch/riscv/kernel/stacktrace.c | 6 ++++++
3 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/arch/riscv/include/asm/ftrace.h b/arch/riscv/include/asm/ftrace.h
index 429a6a156645..6e4b4c96b63e 100644
--- a/arch/riscv/include/asm/ftrace.h
+++ b/arch/riscv/include/asm/ftrace.h
@@ -8,6 +8,7 @@
#if defined(CONFIG_FUNCTION_GRAPH_TRACER) && defined(CONFIG_FRAME_POINTER)
#define HAVE_FUNCTION_GRAPH_FP_TEST
#endif
+#define HAVE_FUNCTION_GRAPH_RET_ADDR_PTR

#define ARCH_SUPPORTS_FTRACE_OPS 1
#ifndef __ASSEMBLY__
diff --git a/arch/riscv/kernel/ftrace.c b/arch/riscv/kernel/ftrace.c
index c9cc884961d7..e02ecd44fe47 100644
--- a/arch/riscv/kernel/ftrace.c
+++ b/arch/riscv/kernel/ftrace.c
@@ -144,7 +144,7 @@ void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr,
return;

err = ftrace_push_return_trace(old, self_addr, &trace.depth,
- frame_pointer, NULL);
+ frame_pointer, parent);
if (err == -EBUSY)
return;
*parent = return_hooker;
diff --git a/arch/riscv/kernel/stacktrace.c b/arch/riscv/kernel/stacktrace.c
index 559aae781154..a4b1d94371a0 100644
--- a/arch/riscv/kernel/stacktrace.c
+++ b/arch/riscv/kernel/stacktrace.c
@@ -18,6 +18,7 @@
#include <linux/sched/debug.h>
#include <linux/sched/task_stack.h>
#include <linux/stacktrace.h>
+#include <linux/ftrace.h>

#ifdef CONFIG_FRAME_POINTER

@@ -63,7 +64,12 @@ static void notrace walk_stackframe(struct task_struct *task,
frame = (struct stackframe *)fp - 1;
sp = fp;
fp = frame->fp;
+#ifdef HAVE_FUNCTION_GRAPH_RET_ADDR_PTR
+ pc = ftrace_graph_ret_addr(current, NULL, frame->ra,
+ (unsigned long *)(fp - 8));
+#else
pc = frame->ra - 0x4;
+#endif
}
}

--
2.15.1

Christoph Hellwig

Jan 10, 2018, 2:43:55 AM
to Alan Kao, Palmer Dabbelt, Albert Ou, Steven Rostedt, Ingo Molnar, Masahiro Yamada, Kamil Rytarowski, Andrew Morton, pat...@groups.riscv.org, linux-...@vger.kernel.org, Alan Kao, Greentime Hu
On Wed, Jan 10, 2018 at 03:38:09PM +0800, Alan Kao wrote:
> -LDFLAGS_vmlinux :=
> +ifeq ($(CONFIG_DYNAMIC_FTRACE),y)
> + LDFLAGS_vmlinux := --no-relax
> +else
> + LDFLAGS_vmlinux :=
> +endif

Why not:

LDFLAGS_vmlinux :=
ifeq ($(CONFIG_DYNAMIC_FTRACE),y)
LDFLAGS_vmlinux += --no-relax
endif

Alan Kao

Jan 10, 2018, 2:54:26 AM
to Christoph Hellwig, Palmer Dabbelt, Albert Ou, Steven Rostedt, Ingo Molnar, Masahiro Yamada, Kamil Rytarowski, Andrew Morton, pat...@groups.riscv.org, linux-...@vger.kernel.org, Alan Kao, Greentime Hu
Thanks for the comment! This will be enhanced in the next try.

Alan

Steven Rostedt

Jan 10, 2018, 11:32:37 AM
to Alan Kao, Palmer Dabbelt, Albert Ou, Ingo Molnar, Masahiro Yamada, Kamil Rytarowski, Andrew Morton, pat...@groups.riscv.org, linux-...@vger.kernel.org, Alan Kao, Greentime Hu
On Wed, 10 Jan 2018 15:38:10 +0800
Alan Kao <none...@gmail.com> wrote:

> +static int ftrace_check_current_call(unsigned long hook_pos,
> + unsigned int *expected)
> +{
> + unsigned int replaced[2];
> + unsigned int nops[2] = {NOP4, NOP4};
> +
> + /* we expect nops at the hook position */
> + if (!expected)
> + expected = nops;
> +
> + /* read the text we want to modify */
> + if (probe_kernel_read(replaced, (void *)hook_pos, MCOUNT_INSN_SIZE))
> + return -EFAULT;
> +
> + /* Make sure it is what we expect it to be */
> + if (replaced[0] != expected[0] || replaced[1] != expected[1]) {

I wonder if we can have a slight enhancement by doing:

if (memcmp(replaced, expected, sizeof(replaced)) != 0) {

-- Steve

Steven Rostedt

Jan 10, 2018, 11:36:46 AM
to Alan Kao, Palmer Dabbelt, Albert Ou, Ingo Molnar, Masahiro Yamada, Kamil Rytarowski, Andrew Morton, pat...@groups.riscv.org, linux-...@vger.kernel.org, Alan Kao, Greentime Hu
On Wed, 10 Jan 2018 15:38:10 +0800
Alan Kao <none...@gmail.com> wrote:

> +static int __ftrace_modify_call(unsigned long hook_pos, unsigned long target,
> + bool enable)
> +{
> + unsigned int offset = (unsigned int)(target - hook_pos);
> + unsigned int auipc_call = to_auipc_insn(offset);
> + unsigned int jalr_call = to_jalr_insn(offset);
> + unsigned int calls[2] = {auipc_call, jalr_call};
> + unsigned int nops[2] = {NOP4, NOP4};
> + int ret = 0;
> +
> + /* when ftrace_make_nop is called */
> + if (!enable)
> + ret = ftrace_check_current_call(hook_pos, calls);
> +
> + if (ret)
> + goto out;
> +
> + /* replace the auipc-jalr pair at once */
> + ret = probe_kernel_write((void *)hook_pos, enable ? calls : nops,
> + MCOUNT_INSN_SIZE);
> + if (ret)

You don't want to return the return of probe_kernel_write() here. For
ftrace bug to work properly, if a fail to write occurs, you need to
return -EPERM.

-- Steve

Alan Kao

Jan 11, 2018, 3:03:03 AM
to Steven Rostedt, Palmer Dabbelt, Albert Ou, Ingo Molnar, Masahiro Yamada, Kamil Rytarowski, Andrew Morton, pat...@groups.riscv.org, linux-...@vger.kernel.org, Alan Kao, Greentime Hu
Thanks for the correction and the style fix in the previous mail.
These problems will be fixed in the next version of the patch.

Alan

Alan Kao

Jan 15, 2018, 1:48:06 AM
to Palmer Dabbelt, Albert Ou, Christoph Hellwig, Steven Rostedt, Ingo Molnar, Masahiro Yamada, Kamil Rytarowski, Andrew Morton, pat...@groups.riscv.org, linux-...@vger.kernel.org, Alan Kao, Greentime Hu
This patch set includes the building blocks of dynamic ftrace features
for RISC-V machines.

Changes in v2:
- Fix the return value as writing to kernel text goes wrong (2/6)
- Replace manual comparisons by calling memcmp (2/6)
- Simplify the conditional assignment in the Makefile (1/6)

Alan Kao (6):
riscv/ftrace: Add RECORD_MCOUNT support
riscv/ftrace: Add dynamic function tracer support
riscv/ftrace: Add dynamic function graph tracer support
riscv/ftrace: Add ARCH_SUPPORTS_FTRACE_OPS support
riscv/ftrace: Add DYNAMIC_FTRACE_WITH_REGS support
riscv/ftrace: Add HAVE_FUNCTION_GRAPH_RET_ADDR_PTR support

arch/riscv/Kconfig | 3 +
arch/riscv/Makefile | 3 +
arch/riscv/include/asm/ftrace.h | 47 ++++++++
arch/riscv/kernel/Makefile | 5 +-
arch/riscv/kernel/ftrace.c | 142 ++++++++++++++++++++++-
arch/riscv/kernel/mcount-dyn.S | 244 ++++++++++++++++++++++++++++++++++++++++
arch/riscv/kernel/mcount.S | 22 ++--
arch/riscv/kernel/stacktrace.c | 6 +
scripts/recordmcount.pl | 5 +
9 files changed, 464 insertions(+), 13 deletions(-)
create mode 100644 arch/riscv/kernel/mcount-dyn.S

--
2.15.1

Alan Kao

Jan 15, 2018, 1:48:09 AM
to Palmer Dabbelt, Albert Ou, Christoph Hellwig, Steven Rostedt, Ingo Molnar, Masahiro Yamada, Kamil Rytarowski, Andrew Morton, pat...@groups.riscv.org, linux-...@vger.kernel.org, Alan Kao, Greentime Hu
Now recordmcount.pl recognizes RISC-V object files. For the mechanism to
work, we have to disable linker relaxation.

Cc: Greentime Hu <gree...@andestech.com>
Signed-off-by: Alan Kao <ala...@andestech.com>
---
arch/riscv/Kconfig | 1 +
arch/riscv/Makefile | 3 +++
scripts/recordmcount.pl | 5 +++++
3 files changed, 9 insertions(+)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 40a67fc12328..1f45a76f412e 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -117,6 +117,7 @@ config ARCH_RV64I
select 64BIT
select HAVE_FUNCTION_TRACER
select HAVE_FUNCTION_GRAPH_TRACER
+ select HAVE_FTRACE_MCOUNT_RECORD

endchoice

diff --git a/arch/riscv/Makefile b/arch/riscv/Makefile
index 6719dd30ec5b..899226e0da7d 100644
--- a/arch/riscv/Makefile
+++ b/arch/riscv/Makefile
@@ -11,6 +11,9 @@
LDFLAGS :=
OBJCOPYFLAGS := -O binary
LDFLAGS_vmlinux :=
+ifeq ($(CONFIG_DYNAMIC_FTRACE),y)
+ LDFLAGS_vmlinux := --no-relax

Alan Kao

Jan 15, 2018, 1:48:12 AM
to Palmer Dabbelt, Albert Ou, Christoph Hellwig, Steven Rostedt, Ingo Molnar, Masahiro Yamada, Kamil Rytarowski, Andrew Morton, pat...@groups.riscv.org, linux-...@vger.kernel.org, Alan Kao, Greentime Hu
We now have dynamic ftrace with the following added items:

* ftrace_make_call, ftrace_make_nop (in kernel/ftrace.c)
The two functions turn any recorded call site of a filtered function
into a call to ftrace_caller or into nops.

* ftrace_update_ftrace_func (in kernel/ftrace.c)
Turns the nops at ftrace_call into a call to a generic entry for
function tracers.

* ftrace_caller (in kernel/mcount-dyn.S)
The entry that each _mcount call site jumps to once the function
is filtered for tracing.

Also, this patch fixes the semantic problems in mcount.S, which will
serve only as a reference implementation once dynamic ftrace is in
place.

Cc: Greentime Hu <gree...@andestech.com>
Signed-off-by: Alan Kao <ala...@andestech.com>
---
arch/riscv/Kconfig | 1 +
arch/riscv/include/asm/ftrace.h | 45 ++++++++++++++++++
arch/riscv/kernel/Makefile | 5 +-
arch/riscv/kernel/ftrace.c | 100 +++++++++++++++++++++++++++++++++++++++-
arch/riscv/kernel/mcount-dyn.S | 52 +++++++++++++++++++++
arch/riscv/kernel/mcount.S | 22 +++++----
6 files changed, 213 insertions(+), 12 deletions(-)
create mode 100644 arch/riscv/kernel/mcount-dyn.S

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 1f45a76f412e..a5c5c88700d4 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -118,6 +118,7 @@ config ARCH_RV64I
select HAVE_FUNCTION_TRACER
select HAVE_FUNCTION_GRAPH_TRACER
select HAVE_FTRACE_MCOUNT_RECORD
+ select HAVE_DYNAMIC_FTRACE

endchoice

diff --git a/arch/riscv/include/asm/ftrace.h b/arch/riscv/include/asm/ftrace.h
index 66d4175eb13e..acf0c7d001f3 100644
--- a/arch/riscv/include/asm/ftrace.h
+++ b/arch/riscv/include/asm/ftrace.h
@@ -8,3 +8,48 @@
#if defined(CONFIG_FUNCTION_GRAPH_TRACER) && defined(CONFIG_FRAME_POINTER)
#define HAVE_FUNCTION_GRAPH_FP_TEST
#endif
+
+#ifndef __ASSEMBLY__
+void _mcount(void);
+static inline unsigned long ftrace_call_adjust(unsigned long addr)
+{
diff --git a/arch/riscv/kernel/ftrace.c b/arch/riscv/kernel/ftrace.c
index d0de68d144cb..311c287433c9 100644
--- a/arch/riscv/kernel/ftrace.c
+++ b/arch/riscv/kernel/ftrace.c
@@ -6,9 +6,107 @@
*/

#include <linux/ftrace.h>
+#include <linux/uaccess.h>
+#include <asm/cacheflush.h>
+
+#ifdef CONFIG_DYNAMIC_FTRACE
+static int ftrace_check_current_call(unsigned long hook_pos,
+ unsigned int *expected)
+{
+ unsigned int replaced[2];
+ unsigned int nops[2] = {NOP4, NOP4};
+
+ /* we expect nops at the hook position */
+ if (!expected)
+ expected = nops;
+
+ /*
+ * Read the text we want to modify;
+ * return must be -EFAULT on read error
+ */
+ if (probe_kernel_read(replaced, (void *)hook_pos, MCOUNT_INSN_SIZE))
+ return -EFAULT;
+
+ /*
+ * Make sure it is what we expect it to be;
+ * return must be -EINVAL on failed comparison
+ */
+ if (memcmp(expected, replaced, sizeof(replaced))) {
+ pr_err("%p: expected (%08x %08x) but get (%08x %08x)",
+ (void *)hook_pos, expected[0], expected[1], replaced[0],
+ replaced[1]);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int __ftrace_modify_call(unsigned long hook_pos, unsigned long target,
+ bool enable)
+{
+ unsigned int offset = (unsigned int)(target - hook_pos);
+ unsigned int auipc_call = to_auipc_insn(offset);
+ unsigned int jalr_call = to_jalr_insn(offset);
+ unsigned int calls[2] = {auipc_call, jalr_call};
+ unsigned int nops[2] = {NOP4, NOP4};
+ int ret = 0;
+
+ /* when ftrace_make_nop is called */
+ if (!enable)
+ ret = ftrace_check_current_call(hook_pos, calls);
+
+ if (ret)
+ return ret;
+
+ /* replace the auipc-jalr pair at once */
+ ret = probe_kernel_write((void *)hook_pos, enable ? calls : nops,
+ MCOUNT_INSN_SIZE);
+ /* return must be -EPERM on write error */
+ if (ret)
+ return -EPERM;
+
+ smp_mb();
+ flush_icache_range((void *)hook_pos, (void *)hook_pos + MCOUNT_INSN_SIZE);
+
+ return 0;
+}
+
+int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
+{
+ int ret = ftrace_check_current_call(rec->ip, NULL);
+
+ if (ret)
/*
- * Most of this file is copied from arm64.
+ * Most of this function is copied from arm64.
*/
void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr,
unsigned long frame_pointer)
diff --git a/arch/riscv/kernel/mcount-dyn.S b/arch/riscv/kernel/mcount-dyn.S
new file mode 100644
index 000000000000..57f80fe09cbd
--- /dev/null
+++ b/arch/riscv/kernel/mcount-dyn.S
@@ -0,0 +1,52 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2017 Andes Technology Corporation */
+
+#include <linux/init.h>
+#include <linux/linkage.h>
+#include <asm/asm.h>
+#include <asm/csr.h>
+#include <asm/unistd.h>
+#include <asm/thread_info.h>
+#include <asm/asm-offsets.h>
+#include <asm-generic/export.h>
+#include <asm/ftrace.h>
+
+ .text
+
+ .macro SAVE_ABI_STATE
+ addi sp, sp, -16
+ sd s0, 0(sp)
+ sd ra, 8(sp)
+ addi s0, sp, 16
+ .endm
+
+ .macro RESTORE_ABI_STATE
+ ld ra, 8(sp)
+ ld s0, 0(sp)
+ addi sp, sp, 16
+ .endm
+
+ENTRY(ftrace_caller)
+ /*
+ * a0: the address in the caller when calling ftrace_caller
+ * a1: the caller's return address
+ */
+ ld a1, -8(s0)
+ addi a0, ra, -MCOUNT_INSN_SIZE
+ SAVE_ABI_STATE
+ftrace_call:
+ .global ftrace_call
+ /*
+ * For the dynamic ftrace to work, here we should reserve at least
+ * 8 bytes for a functional auipc-jalr pair. Pseudo inst nop may be
+ * interpreted as different length in different models, so we manually
+ * *expand* two 4-byte nops here.
+ *
+ * Calling ftrace_update_ftrace_func would overwrite the nops below.
+ * Check ftrace_modify_all_code for details.
+ */
+ addi x0, x0, 0
+ addi x0, x0, 0
+ RESTORE_ABI_STATE
+ ret
+ENDPROC(ftrace_caller)
diff --git a/arch/riscv/kernel/mcount.S b/arch/riscv/kernel/mcount.S
index c46a778627be..ce9bdc57a2a1 100644
--- a/arch/riscv/kernel/mcount.S
+++ b/arch/riscv/kernel/mcount.S
@@ -32,13 +32,13 @@
addi s0, sp, 32
.endm

- .macro STORE_ABI_STATE
+ .macro RESTORE_ABI_STATE
ld ra, 8(sp)
ld s0, 0(sp)
addi sp, sp, 16
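The patch above relies on to_auipc_insn() and to_jalr_insn() helpers whose definitions are not visible in this excerpt. As a rough, hypothetical sketch of how such an encoding could work (the names are taken from the hunk; the field layout below is just the standard RISC-V U-type/I-type encoding with rd = rs1 = ra, and may differ from what the patch actually does):

```c
#include <stdint.h>

#define FTRACE_RA    1u          /* rd/rs1 = ra (x1) */
#define AUIPC_OPCODE 0x17u
#define JALR_OPCODE  0x67u

/* auipc ra, hi20: the +0x800 rounds the upper immediate so that the
 * sign-extended lo12 of the following jalr lands on the right target. */
static uint32_t to_auipc_insn(uint32_t offset)
{
    return ((offset + 0x800) & 0xfffff000) | (FTRACE_RA << 7) | AUIPC_OPCODE;
}

/* jalr ra, lo12(ra) */
static uint32_t to_jalr_insn(uint32_t offset)
{
    return ((offset & 0xfff) << 20) | (FTRACE_RA << 15) | (FTRACE_RA << 7)
           | JALR_OPCODE;
}
```

For example, a forward offset of 0x1234 encodes as `auipc ra, 0x1` followed by `jalr ra, 0x234(ra)`; the +0x800 adjustment compensates for jalr sign-extending its 12-bit immediate.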

Alan Kao

Jan 15, 2018, 1:48:15 AM
to Palmer Dabbelt, Albert Ou, Christoph Hellwig, Steven Rostedt, Ingo Molnar, Masahiro Yamada, Kamil Rytarowski, Andrew Morton, pat...@groups.riscv.org, linux-...@vger.kernel.org, Alan Kao, Greentime Hu
Once the function_graph tracer is enabled, a filtered function has the
following call sequence:

* ftrace_caller ==> on/off by ftrace_make_call/ftrace_make_nop
* ftrace_graph_caller
* ftrace_graph_call ==> on/off by ftrace_en/disable_ftrace_graph_caller
* prepare_ftrace_return

Considering the upcoming DYNAMIC_FTRACE_WITH_REGS feature, it is more
extensible to have an ftrace_graph_caller function, instead of calling
prepare_ftrace_return directly from ftrace_caller.

Cc: Greentime Hu <gree...@andestech.com>
Signed-off-by: Alan Kao <ala...@andestech.com>
---
arch/riscv/kernel/ftrace.c | 25 +++++++++++++++-
arch/riscv/kernel/mcount-dyn.S | 65 ++++++++++++++++++++++++++++++++++++++++++
2 files changed, 89 insertions(+), 1 deletion(-)

diff --git a/arch/riscv/kernel/ftrace.c b/arch/riscv/kernel/ftrace.c
index 311c287433c9..dce1286af9b0 100644
--- a/arch/riscv/kernel/ftrace.c
+++ b/arch/riscv/kernel/ftrace.c
@@ -51,7 +51,7 @@ static int __ftrace_modify_call(unsigned long hook_pos, unsigned long target,
unsigned int nops[2] = {NOP4, NOP4};
int ret = 0;

- /* when ftrace_make_nop is called */
+ /* for ftrace_make_nop and ftrace_disable_ftrace_graph_caller */
if (!enable)
ret = ftrace_check_current_call(hook_pos, calls);

@@ -105,6 +105,7 @@ int __init ftrace_dyn_arch_init(void)
}
#endif

+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
/*
* Most of this function is copied from arm64.
*/
@@ -137,3 +138,25 @@ void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr,
return;
*parent = return_hooker;
}
+
+#ifdef CONFIG_DYNAMIC_FTRACE
+extern void ftrace_graph_call(void);
+int ftrace_enable_ftrace_graph_caller(void)
+{
+ int ret = ftrace_check_current_call((unsigned long)&ftrace_graph_call,
+ NULL);
+
+ if (ret)
+ return ret;
+
+ return __ftrace_modify_call((unsigned long)&ftrace_graph_call,
+ (unsigned long)&prepare_ftrace_return, true);
+}
+
+int ftrace_disable_ftrace_graph_caller(void)
+{
+ return __ftrace_modify_call((unsigned long)&ftrace_graph_call,
+ (unsigned long)&prepare_ftrace_return, false);
+}
+#endif /* CONFIG_DYNAMIC_FTRACE */
+#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
diff --git a/arch/riscv/kernel/mcount-dyn.S b/arch/riscv/kernel/mcount-dyn.S
index 57f80fe09cbd..64e715d4e180 100644
--- a/arch/riscv/kernel/mcount-dyn.S
+++ b/arch/riscv/kernel/mcount-dyn.S
ld ra, 8(sp)
ld s0, 0(sp)
addi sp, sp, 16
+#endif
.endm

+ .macro RESTORE_GRAPH_ARGS
+ ld a0, 24(sp)
+ ld a1, 16(sp)
+#ifdef HAVE_FUNCTION_GRAPH_FP_TEST
+ ld a2, 8(sp)
+#endif
+ .endm
+
+ENTRY(ftrace_graph_caller)
+ addi sp, sp, -16
+ sd s0, 0(sp)
+ sd ra, 8(sp)
+ addi s0, sp, 16
+ftrace_graph_call:
+ .global ftrace_graph_call
+ /*
+ * Calling ftrace_enable/disable_ftrace_graph_caller would overwrite the
+ * nops below. Check ftrace_modify_all_code for details.
+ */
+ addi x0, x0, 0
+ addi x0, x0, 0
+ ld ra, 8(sp)
+ ld s0, 0(sp)
+ addi sp, sp, 16
+ ret
+ENDPROC(ftrace_graph_caller)
+
ENTRY(ftrace_caller)
/*
* a0: the address in the caller when calling ftrace_caller
@@ -33,6 +78,20 @@ ENTRY(ftrace_caller)
*/
ld a1, -8(s0)
addi a0, ra, -MCOUNT_INSN_SIZE
+
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+ /*
+ * the graph tracer (specifically, prepare_ftrace_return) needs these
+ * arguments but for now the function tracer occupies the regs, so we
+ * save them in temporary regs to recover later.
+ */
+ addi t0, s0, -8
+ mv t1, a0
+#ifdef HAVE_FUNCTION_GRAPH_FP_TEST
+ ld t2, -16(s0)
+#endif
+#endif
+
SAVE_ABI_STATE
ftrace_call:
.global ftrace_call
@@ -47,6 +106,12 @@ ftrace_call:
*/
addi x0, x0, 0
addi x0, x0, 0
+
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+ RESTORE_GRAPH_ARGS
+ call ftrace_graph_caller
+#endif
+

Alan Kao

unread,
Jan 15, 2018, 1:48:18 AM1/15/18
to Palmer Dabbelt, Albert Ou, Christoph Hellwig, Steven Rostedt, Ingo Molnar, Masahiro Yamada, Kamil Rytarowski, Andrew Morton, pat...@groups.riscv.org, linux-...@vger.kernel.org, Alan Kao, Greentime Hu
Cc: Greentime Hu <gree...@andestech.com>
Signed-off-by: Alan Kao <ala...@andestech.com>
---
arch/riscv/include/asm/ftrace.h | 1 +
arch/riscv/kernel/mcount-dyn.S | 3 +++
2 files changed, 4 insertions(+)

diff --git a/arch/riscv/include/asm/ftrace.h b/arch/riscv/include/asm/ftrace.h
index acf0c7d001f3..429a6a156645 100644
--- a/arch/riscv/include/asm/ftrace.h
+++ b/arch/riscv/include/asm/ftrace.h
@@ -9,6 +9,7 @@
#define HAVE_FUNCTION_GRAPH_FP_TEST
#endif

+#define ARCH_SUPPORTS_FTRACE_OPS 1
#ifndef __ASSEMBLY__
void _mcount(void);
static inline unsigned long ftrace_call_adjust(unsigned long addr)
diff --git a/arch/riscv/kernel/mcount-dyn.S b/arch/riscv/kernel/mcount-dyn.S
index 64e715d4e180..627478571c7a 100644
--- a/arch/riscv/kernel/mcount-dyn.S
+++ b/arch/riscv/kernel/mcount-dyn.S
@@ -75,9 +75,12 @@ ENTRY(ftrace_caller)
/*
* a0: the address in the caller when calling ftrace_caller
* a1: the caller's return address
+ * a2: the address of global variable function_trace_op
*/
ld a1, -8(s0)
addi a0, ra, -MCOUNT_INSN_SIZE
+ la t5, function_trace_op
+ ld a2, 0(t5)
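Loading function_trace_op into a2 matches the extended tracer callback convention that ARCH_SUPPORTS_FTRACE_OPS enables, where the callback receives its ftrace_ops as an extra argument. A minimal userspace model of that convention (the struct and names below are simplified stand-ins for illustration, not the kernel's real definitions):

```c
#include <stddef.h>

/* Simplified stand-in for struct ftrace_ops: per-tracer state. */
struct ftrace_ops_model {
    const char *name;
    unsigned long hit;               /* how often the callback fired */
};

/* Simplified stand-in for ftrace_func_t with the ops argument. */
typedef void (*ftrace_func_model_t)(unsigned long ip,
                                    unsigned long parent_ip,
                                    struct ftrace_ops_model *op);

static void count_hits(unsigned long ip, unsigned long parent_ip,
                       struct ftrace_ops_model *op)
{
    (void)ip;
    (void)parent_ip;
    op->hit++;                       /* per-ops state, enabled by the a2 arg */
}

/* Mirrors the a0/a1/a2 argument setup done by ftrace_caller. */
static unsigned long trace_one(ftrace_func_model_t cb,
                               struct ftrace_ops_model *op,
                               unsigned long ip, unsigned long parent)
{
    cb(ip, parent, op);
    return op->hit;
}
```

Passing the ops pointer lets one generic callback serve several registered tracers, each keeping its own state, which is why the assembly must materialize a2 before entering the C handler.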

Alan Kao

Jan 15, 2018, 1:48:21 AM
to Palmer Dabbelt, Albert Ou, Christoph Hellwig, Steven Rostedt, Ingo Molnar, Masahiro Yamada, Kamil Rytarowski, Andrew Morton, pat...@groups.riscv.org, linux-...@vger.kernel.org, Alan Kao, Greentime Hu
Cc: Greentime Hu <gree...@andestech.com>
Signed-off-by: Alan Kao <ala...@andestech.com>
---
arch/riscv/Kconfig | 1 +
arch/riscv/kernel/ftrace.c | 17 ++++++
arch/riscv/kernel/mcount-dyn.S | 124 +++++++++++++++++++++++++++++++++++++++++
3 files changed, 142 insertions(+)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index a5c5c88700d4..bb2c64976583 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -119,6 +119,7 @@ config ARCH_RV64I
select HAVE_FUNCTION_GRAPH_TRACER
select HAVE_FTRACE_MCOUNT_RECORD
select HAVE_DYNAMIC_FTRACE
+ select HAVE_DYNAMIC_FTRACE_WITH_REGS

endchoice

diff --git a/arch/riscv/kernel/ftrace.c b/arch/riscv/kernel/ftrace.c
index dce1286af9b0..3922b29c107b 100644
--- a/arch/riscv/kernel/ftrace.c
+++ b/arch/riscv/kernel/ftrace.c
@@ -105,6 +105,23 @@ int __init ftrace_dyn_arch_init(void)
}
#endif

+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
+int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
+ unsigned long addr)
+{
+ unsigned int offset = (unsigned int)(old_addr - rec->ip);
+ unsigned int auipc_call = to_auipc_insn(offset);
+ unsigned int jalr_call = to_jalr_insn(offset);
+ unsigned int calls[2] = {auipc_call, jalr_call};
+ int ret = ftrace_check_current_call(rec->ip, calls);
+
+ if (ret)
+ return ret;
+
+ return __ftrace_modify_call(rec->ip, addr, true);
+}
+#endif
+
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
/*
* Most of this function is copied from arm64.
diff --git a/arch/riscv/kernel/mcount-dyn.S b/arch/riscv/kernel/mcount-dyn.S
index 627478571c7a..3ec3ddbfb5e7 100644
--- a/arch/riscv/kernel/mcount-dyn.S
+++ b/arch/riscv/kernel/mcount-dyn.S
+ .endm
+
+ .endm
+
+ .macro RESTORE_GRAPH_REG_ARGS
+ ld a0, PT_T0(sp)
+ ld a1, PT_T1(sp)
+#ifdef HAVE_FUNCTION_GRAPH_FP_TEST
+ ld a2, PT_T2(sp)
+#endif
+ .endm
+
+/*
+ * Most of the contents are the same as ftrace_caller.
+ */
+ENTRY(ftrace_regs_caller)
+ /*
+ * a3: the address of all registers in the stack
+ */
+ ld a1, -8(s0)
+ addi a0, ra, -MCOUNT_INSN_SIZE
+ la t5, function_trace_op
+ ld a2, 0(t5)
+ addi a3, sp, -(PT_SIZE_ON_STACK+16)
+
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+ addi t0, s0, -8
+ mv t1, a0
+#ifdef HAVE_FUNCTION_GRAPH_FP_TEST
+ ld t2, -16(s0)
+#endif
+#endif
+ SAVE_ALL
+
+ftrace_regs_call:
+ .global ftrace_regs_call
+ addi x0, x0, 0
+ addi x0, x0, 0
+
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+ RESTORE_GRAPH_REG_ARGS
+ call ftrace_graph_caller
+#endif
+

Alan Kao

Jan 15, 2018, 1:48:24 AM
to Palmer Dabbelt, Albert Ou, Christoph Hellwig, Steven Rostedt, Ingo Molnar, Masahiro Yamada, Kamil Rytarowski, Andrew Morton, pat...@groups.riscv.org, linux-...@vger.kernel.org, Alan Kao, Greentime Hu
In walk_stackframe, pc now receives its value from ftrace_graph_ret_addr
instead of being computed manually.

Note that the original calculation,
pc = frame->ra - 4
is buggy when the instruction at the return address happens to be a
compressed instruction. But since this is not a critical part of ftrace,
it is left as-is for now to ease the review process.

Cc: Greentime Hu <gree...@andestech.com>
Signed-off-by: Alan Kao <ala...@andestech.com>
---
arch/riscv/include/asm/ftrace.h | 1 +
arch/riscv/kernel/ftrace.c | 2 +-
arch/riscv/kernel/stacktrace.c | 6 ++++++
3 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/arch/riscv/include/asm/ftrace.h b/arch/riscv/include/asm/ftrace.h
index 429a6a156645..6e4b4c96b63e 100644
--- a/arch/riscv/include/asm/ftrace.h
+++ b/arch/riscv/include/asm/ftrace.h
@@ -8,6 +8,7 @@
#if defined(CONFIG_FUNCTION_GRAPH_TRACER) && defined(CONFIG_FRAME_POINTER)
#define HAVE_FUNCTION_GRAPH_FP_TEST
#endif
+#define HAVE_FUNCTION_GRAPH_RET_ADDR_PTR

#define ARCH_SUPPORTS_FTRACE_OPS 1
#ifndef __ASSEMBLY__
diff --git a/arch/riscv/kernel/ftrace.c b/arch/riscv/kernel/ftrace.c
index 3922b29c107b..ded8ab25f4ad 100644
--- a/arch/riscv/kernel/ftrace.c
+++ b/arch/riscv/kernel/ftrace.c
@@ -150,7 +150,7 @@ void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr,
return;

Stefan O'Rear

Jan 15, 2018, 2:24:55 AM
to pat...@groups.riscv.org, Palmer Dabbelt, Albert Ou, Christoph Hellwig, Steven Rostedt, Ingo Molnar, Masahiro Yamada, Kamil Rytarowski, Andrew Morton, linux-...@vger.kernel.org, Alan Kao, Greentime Hu
On Sun, Jan 14, 2018 at 10:47 PM, Alan Kao <none...@gmail.com> wrote:
> + /*
> + * For the dynamic ftrace to work, here we should reserve at least
> + * 8 bytes for a functional auipc-jalr pair. Pseudo inst nop may be
> + * interpreted as different length in different models, so we manually
> + * *expand* two 4-byte nops here.
> + *
> + * Calling ftrace_update_ftrace_func would overwrite the nops below.
> + * Check ftrace_modify_all_code for details.
> + */
> + addi x0, x0, 0
> + addi x0, x0, 0

This relies on assembler behavior that is undocumented and, if my
reading of the specification is correct, a bug.

The documented way to assemble a sequence of two 4-byte NOPs regardless
of subtarget is as follows:

.option push
.option norvc
nop
nop
.option pop

I have filed https://github.com/riscv/riscv-binutils-gdb/issues/135 to
get clarity on the assembler behavior; the explicit approach may be
preferable even if the assembler behavior turns out to be correct.

-s
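The size concern here is that, under the C extension, a `nop` pseudo-instruction may be emitted as the 2-byte `c.nop` rather than the 4-byte `addi x0, x0, 0`, while the patching code needs exactly eight bytes to hold an auipc-jalr pair. A quick sketch of the two encodings (these are the well-known standard RISC-V encodings, not values taken from the patch):

```c
#include <stdint.h>

/* 4-byte canonical nop: addi x0, x0, 0 -- an I-type instruction with
 * every field zero except the OP-IMM opcode (0x13). */
static uint32_t rv_nop4(void)
{
    uint32_t imm = 0, rs1 = 0, funct3 = 0, rd = 0, opcode = 0x13;
    return (imm << 20) | (rs1 << 15) | (funct3 << 12) | (rd << 7) | opcode;
}

/* 2-byte compressed nop: c.nop, fixed encoding 0x0001. */
static uint16_t rv_cnop(void)
{
    return 0x0001;
}
```

An assembler free to compress could turn two `nop`s into four bytes of `c.nop`s, leaving too little room for the 8-byte call; hence the `.option norvc` suggestion, or spelling out `addi x0, x0, 0` twice.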

Alan Kao

Jan 15, 2018, 2:38:29 AM
to pat...@groups.riscv.org, Palmer Dabbelt, Albert Ou, Christoph Hellwig, Steven Rostedt, Ingo Molnar, Masahiro Yamada, Kamil Rytarowski, Andrew Morton, linux-...@vger.kernel.org, Alan Kao, Greentime Hu
Thanks for pointing this out.

After checking how other architectures handle this, I think we can
just put a

call ftrace_stub

here instead. Since we do not support linker relaxation together with
ftrace, this call will always expand to an 8-byte auipc-jalr pair.

Alan


Alan Kao

Jan 18, 2018, 10:46:04 AM
to Palmer Dabbelt, Albert Ou, Steven Rostedt, Ingo Molnar, Masahiro Yamada, Kamil Rytarowski, Andrew Morton, pat...@groups.riscv.org, linux-...@vger.kernel.org, Stefan O'Rear, Alan Kao, Greentime Hu
This patch set includes the building blocks of dynamic ftrace features
for RISC-V machines.

Changes in v3:
- Replace the nops at the tracer call sites with a "call ftrace_stub"
instruction for clarity (1/6 and 2/6)

Changes in v2:
- Fix the return values when writing to kernel text goes wrong (2/6)
- Replace manual comparisons by calling memcmp (2/6)
- Simplify the conditional assignment in the Makefile (1/6)

Alan Kao (6):
riscv/ftrace: Add RECORD_MCOUNT support
riscv/ftrace: Add dynamic function tracer support
riscv/ftrace: Add dynamic function graph tracer support
riscv/ftrace: Add ARCH_SUPPORTS_FTRACE_OPS support
riscv/ftrace: Add DYNAMIC_FTRACE_WITH_REGS support
riscv/ftrace: Add HAVE_FUNCTION_GRAPH_RET_ADDR_PTR support

arch/riscv/Kconfig | 3 +
arch/riscv/Makefile | 3 +
arch/riscv/include/asm/ftrace.h | 47 ++++++++
arch/riscv/kernel/Makefile | 5 +-
arch/riscv/kernel/ftrace.c | 142 ++++++++++++++++++++++-
arch/riscv/kernel/mcount-dyn.S | 241 ++++++++++++++++++++++++++++++++++++++++
arch/riscv/kernel/mcount.S | 22 ++--
arch/riscv/kernel/stacktrace.c | 6 +
scripts/recordmcount.pl | 5 +
9 files changed, 461 insertions(+), 13 deletions(-)
create mode 100644 arch/riscv/kernel/mcount-dyn.S

--
2.15.1

Alan Kao

Jan 18, 2018, 10:46:07 AM
to Palmer Dabbelt, Albert Ou, Steven Rostedt, Ingo Molnar, Masahiro Yamada, Kamil Rytarowski, Andrew Morton, pat...@groups.riscv.org, linux-...@vger.kernel.org, Stefan O'Rear, Alan Kao, Greentime Hu
Now recordmcount.pl recognizes RISC-V object files. For the mechanism to
work, we have to disable linker relaxation: relaxation happens after the
script records the offsets of the _mcount call sites, which would leave
those records unreliable.

Cc: Greentime Hu <gree...@andestech.com>
Signed-off-by: Alan Kao <ala...@andestech.com>
---
arch/riscv/Kconfig | 1 +
arch/riscv/Makefile | 3 +++
scripts/recordmcount.pl | 5 +++++
3 files changed, 9 insertions(+)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 40a67fc12328..1f45a76f412e 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig

Alan Kao

unread,
Jan 18, 2018, 10:46:10 AM1/18/18
to Palmer Dabbelt, Albert Ou, Steven Rostedt, Ingo Molnar, Masahiro Yamada, Kamil Rytarowski, Andrew Morton, pat...@groups.riscv.org, linux-...@vger.kernel.org, Stefan O'Rear, Alan Kao, Greentime Hu
We now have dynamic ftrace with the following added items:

* ftrace_make_call, ftrace_make_nop (in kernel/ftrace.c)
These two functions turn any recorded call site of a filtered function
into a call to ftrace_caller, or back into nops.

* ftrace_update_ftrace_func (in kernel/ftrace.c)
turns the stub call at ftrace_call into a call to the generic entry for
function tracers.

* ftrace_caller (in kernel/mcount-dyn.S)
The entry that filtered _mcount call sites branch to once they are
enabled for tracing.

Also, this patch fixes the semantic problems in mcount.S, which will serve
only as a reference implementation once dynamic ftrace is in place.

Cc: Greentime Hu <gree...@andestech.com>
Signed-off-by: Alan Kao <ala...@andestech.com>
---
arch/riscv/Kconfig | 1 +
arch/riscv/include/asm/ftrace.h | 45 ++++++++++++++++++
arch/riscv/kernel/Makefile | 5 +-
arch/riscv/kernel/ftrace.c | 100 +++++++++++++++++++++++++++++++++++++++-
arch/riscv/kernel/mcount-dyn.S | 50 ++++++++++++++++++++
arch/riscv/kernel/mcount.S | 22 +++++----
6 files changed, 211 insertions(+), 12 deletions(-)
create mode 100644 arch/riscv/kernel/mcount-dyn.S

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 1f45a76f412e..a5c5c88700d4 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -118,6 +118,7 @@ config ARCH_RV64I
select HAVE_FUNCTION_TRACER
select HAVE_FUNCTION_GRAPH_TRACER
select HAVE_FTRACE_MCOUNT_RECORD
+ select HAVE_DYNAMIC_FTRACE

endchoice

diff --git a/arch/riscv/include/asm/ftrace.h b/arch/riscv/include/asm/ftrace.h
index 66d4175eb13e..acf0c7d001f3 100644
--- a/arch/riscv/include/asm/ftrace.h
+++ b/arch/riscv/include/asm/ftrace.h
@@ -8,3 +8,48 @@
#if defined(CONFIG_FUNCTION_GRAPH_TRACER) && defined(CONFIG_FRAME_POINTER)
#define HAVE_FUNCTION_GRAPH_FP_TEST
#endif
diff --git a/arch/riscv/kernel/ftrace.c b/arch/riscv/kernel/ftrace.c
index d0de68d144cb..311c287433c9 100644
--- a/arch/riscv/kernel/ftrace.c
+++ b/arch/riscv/kernel/ftrace.c
+static int __ftrace_modify_call(unsigned long hook_pos, unsigned long target,
+ bool enable)
+{
+ unsigned int offset = (unsigned int)(target - hook_pos);
+ unsigned int auipc_call = to_auipc_insn(offset);
+ unsigned int jalr_call = to_jalr_insn(offset);
+ unsigned int calls[2] = {auipc_call, jalr_call};
+ unsigned int nops[2] = {NOP4, NOP4};
+ int ret = 0;
+
+ /* when ftrace_make_nop is called */
+ if (!enable)
+ ret = ftrace_check_current_call(hook_pos, calls);
+
+ if (ret)
+ return ret;
+
+ /* replace the auipc-jalr pair at once */
+ ret = probe_kernel_write((void *)hook_pos, enable ? calls : nops,
+ MCOUNT_INSN_SIZE);
+ /* return must be -EPERM on write error */
+ if (ret)
+ return -EPERM;
+
+ smp_mb();
+ flush_icache_range((void *)hook_pos, (void *)hook_pos + MCOUNT_INSN_SIZE);
+
+ return 0;
+}
+
+int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
+{
+ int ret = ftrace_check_current_call(rec->ip, NULL);
+
+ if (ret)
+ return ret;
+
+ return __ftrace_modify_call(rec->ip, addr, true);
+}
+
+int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec,
+ unsigned long addr)
+{
+ return __ftrace_modify_call(rec->ip, addr, false);
+}
+
+int ftrace_update_ftrace_func(ftrace_func_t func)
+{
+ int ret = __ftrace_modify_call((unsigned long)&ftrace_call,
+ (unsigned long)func, true);
+ if (!ret) {
+ ret = __ftrace_modify_call((unsigned long)&ftrace_regs_call,
+ (unsigned long)func, true);
+ }
+
+ return ret;
+}
+
+int __init ftrace_dyn_arch_init(void)
+{
+ return 0;
+}
+#endif

/*
- * Most of this file is copied from arm64.
+ * Most of this function is copied from arm64.
*/
void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr,
unsigned long frame_pointer)
diff --git a/arch/riscv/kernel/mcount-dyn.S b/arch/riscv/kernel/mcount-dyn.S
new file mode 100644
index 000000000000..5731d9f56142
--- /dev/null
+++ b/arch/riscv/kernel/mcount-dyn.S
@@ -0,0 +1,50 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2017 Andes Technology Corporation */
+
+#include <linux/init.h>
+#include <linux/linkage.h>
+#include <asm/asm.h>
+#include <asm/csr.h>
+#include <asm/unistd.h>
+#include <asm/thread_info.h>
+#include <asm/asm-offsets.h>
+#include <asm-generic/export.h>
+#include <asm/ftrace.h>
+
+ .text
+
+ .macro SAVE_ABI_STATE
+ addi sp, sp, -16
+ sd s0, 0(sp)
+ sd ra, 8(sp)
+ addi s0, sp, 16
+ .endm
+
+ .macro RESTORE_ABI_STATE
+ ld ra, 8(sp)
+ ld s0, 0(sp)
+ addi sp, sp, 16
+ .endm
+
+ENTRY(ftrace_caller)
+ /*
+ * a0: the address in the caller when calling ftrace_caller
+ * a1: the caller's return address
+ */
+ ld a1, -8(s0)
+ addi a0, ra, -MCOUNT_INSN_SIZE
+ SAVE_ABI_STATE
+ftrace_call:
+ .global ftrace_call
+ /*
+ * For the dynamic ftrace to work, here we should reserve at least
+ * 8 bytes for a functional auipc-jalr pair. The following call
+ * means nothing but the required 8 bytes.
+ *
+ * Calling ftrace_update_ftrace_func would overwrite the call below.
+ * Check ftrace_modify_all_code for details.
+ */
+ call ftrace_stub
+ RESTORE_ABI_STATE
+ ret
+ENDPROC(ftrace_caller)
diff --git a/arch/riscv/kernel/mcount.S b/arch/riscv/kernel/mcount.S
index c46a778627be..ce9bdc57a2a1 100644
--- a/arch/riscv/kernel/mcount.S
+++ b/arch/riscv/kernel/mcount.S
@@ -32,13 +32,13 @@
addi s0, sp, 32
.endm

- .macro STORE_ABI_STATE
+ .macro RESTORE_ABI_STATE
ld ra, 8(sp)
ld s0, 0(sp)
addi sp, sp, 16

Alan Kao

Jan 18, 2018, 10:46:13 AM
to Palmer Dabbelt, Albert Ou, Steven Rostedt, Ingo Molnar, Masahiro Yamada, Kamil Rytarowski, Andrew Morton, pat...@groups.riscv.org, linux-...@vger.kernel.org, Stefan O'Rear, Alan Kao, Greentime Hu
Once the function_graph tracer is enabled, a filtered function has the
following call sequence:

* ftrace_caller ==> on/off by ftrace_make_call/ftrace_make_nop
* ftrace_graph_caller
* ftrace_graph_call ==> on/off by ftrace_en/disable_ftrace_graph_caller
* prepare_ftrace_return

Considering the upcoming DYNAMIC_FTRACE_WITH_REGS feature, it is more
extensible to have an ftrace_graph_caller function, instead of calling
prepare_ftrace_return directly from ftrace_caller.

Cc: Greentime Hu <gree...@andestech.com>
Signed-off-by: Alan Kao <ala...@andestech.com>
---
arch/riscv/kernel/ftrace.c | 25 ++++++++++++++++-
arch/riscv/kernel/mcount-dyn.S | 64 ++++++++++++++++++++++++++++++++++++++++++
2 files changed, 88 insertions(+), 1 deletion(-)

diff --git a/arch/riscv/kernel/ftrace.c b/arch/riscv/kernel/ftrace.c
index 311c287433c9..dce1286af9b0 100644
--- a/arch/riscv/kernel/ftrace.c
+++ b/arch/riscv/kernel/ftrace.c
@@ -51,7 +51,7 @@ static int __ftrace_modify_call(unsigned long hook_pos, unsigned long target,
unsigned int nops[2] = {NOP4, NOP4};
int ret = 0;

- /* when ftrace_make_nop is called */
+ /* for ftrace_make_nop and ftrace_disable_ftrace_graph_caller */
if (!enable)
ret = ftrace_check_current_call(hook_pos, calls);

@@ -105,6 +105,7 @@ int __init ftrace_dyn_arch_init(void)
}
#endif

+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
/*
* Most of this function is copied from arm64.
*/
@@ -137,3 +138,25 @@ void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr,
return;
*parent = return_hooker;
}
+
+#ifdef CONFIG_DYNAMIC_FTRACE
+extern void ftrace_graph_call(void);
+int ftrace_enable_ftrace_graph_caller(void)
+{
+ int ret = ftrace_check_current_call((unsigned long)&ftrace_graph_call,
+ NULL);
+
+ if (ret)
+ return ret;
+
+ return __ftrace_modify_call((unsigned long)&ftrace_graph_call,
+ (unsigned long)&prepare_ftrace_return, true);
+}
+
+int ftrace_disable_ftrace_graph_caller(void)
+{
+ return __ftrace_modify_call((unsigned long)&ftrace_graph_call,
+ (unsigned long)&prepare_ftrace_return, false);
+}
+#endif /* CONFIG_DYNAMIC_FTRACE */
+#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
diff --git a/arch/riscv/kernel/mcount-dyn.S b/arch/riscv/kernel/mcount-dyn.S
index 5731d9f56142..1ddf8b1a9345 100644
--- a/arch/riscv/kernel/mcount-dyn.S
+++ b/arch/riscv/kernel/mcount-dyn.S
@@ -14,18 +14,62 @@
ld ra, 8(sp)
ld s0, 0(sp)
addi sp, sp, 16
+#endif
.endm

+ .macro RESTORE_GRAPH_ARGS
+ ld a0, 24(sp)
+ ld a1, 16(sp)
+#ifdef HAVE_FUNCTION_GRAPH_FP_TEST
+ ld a2, 8(sp)
+#endif
+ .endm
+
+ENTRY(ftrace_graph_caller)
+ addi sp, sp, -16
+ sd s0, 0(sp)
+ sd ra, 8(sp)
+ addi s0, sp, 16
+ftrace_graph_call:
+ .global ftrace_graph_call
+ /*
+ * Calling ftrace_enable/disable_ftrace_graph_caller would overwrite the
+ * call below. Check ftrace_modify_all_code for details.
+ */
+ call ftrace_stub
+ ld ra, 8(sp)
+ ld s0, 0(sp)
+ addi sp, sp, 16
+ ret
+ENDPROC(ftrace_graph_caller)
+
ENTRY(ftrace_caller)
/*
* a0: the address in the caller when calling ftrace_caller
@@ -33,6 +77,20 @@ ENTRY(ftrace_caller)
*/
ld a1, -8(s0)
addi a0, ra, -MCOUNT_INSN_SIZE
+
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+ /*
+ * the graph tracer (specifically, prepare_ftrace_return) needs these
+ * arguments but for now the function tracer occupies the regs, so we
+ * save them in temporary regs to recover later.
+ */
+ addi t0, s0, -8
+ mv t1, a0
+#ifdef HAVE_FUNCTION_GRAPH_FP_TEST
+ ld t2, -16(s0)
+#endif
+#endif
+
SAVE_ABI_STATE
ftrace_call:
.global ftrace_call
@@ -45,6 +103,12 @@ ftrace_call:
* Check ftrace_modify_all_code for details.
*/
call ftrace_stub
+
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+ RESTORE_GRAPH_ARGS
+ call ftrace_graph_caller
+#endif
+
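The call sequence in this patch gives two independent switch points: the per-function site toggled by ftrace_make_call/ftrace_make_nop, and the shared ftrace_graph_call site toggled by ftrace_enable/disable_ftrace_graph_caller. A toy model of how the two combine (the names below are illustrative, not kernel API):

```c
#include <stdbool.h>

struct trace_site_model {
    bool func_site_on;     /* ftrace_make_call vs ftrace_make_nop */
    bool graph_call_on;    /* ftrace_enable/disable_ftrace_graph_caller */
};

/* The function tracer runs whenever the per-function site is patched in. */
static bool runs_function_tracer(const struct trace_site_model *s)
{
    return s->func_site_on;
}

/* prepare_ftrace_return runs only if ftrace_caller is reached at all
 * AND ftrace_graph_call has been turned into a real call. */
static bool runs_graph_tracer(const struct trace_site_model *s)
{
    return s->func_site_on && s->graph_call_on;
}
```

Graph-tracing a function therefore requires both patches to be applied; disabling the per-function site alone silences the graph tracer as well.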

Alan Kao

Jan 18, 2018, 10:46:17 AM
to Palmer Dabbelt, Albert Ou, Steven Rostedt, Ingo Molnar, Masahiro Yamada, Kamil Rytarowski, Andrew Morton, pat...@groups.riscv.org, linux-...@vger.kernel.org, Stefan O'Rear, Alan Kao, Greentime Hu
Cc: Greentime Hu <gree...@andestech.com>
Signed-off-by: Alan Kao <ala...@andestech.com>
---
arch/riscv/include/asm/ftrace.h | 1 +
arch/riscv/kernel/mcount-dyn.S | 3 +++
2 files changed, 4 insertions(+)

diff --git a/arch/riscv/include/asm/ftrace.h b/arch/riscv/include/asm/ftrace.h
index acf0c7d001f3..429a6a156645 100644
--- a/arch/riscv/include/asm/ftrace.h
+++ b/arch/riscv/include/asm/ftrace.h
@@ -9,6 +9,7 @@
#define HAVE_FUNCTION_GRAPH_FP_TEST
#endif

+#define ARCH_SUPPORTS_FTRACE_OPS 1
#ifndef __ASSEMBLY__
void _mcount(void);
static inline unsigned long ftrace_call_adjust(unsigned long addr)
diff --git a/arch/riscv/kernel/mcount-dyn.S b/arch/riscv/kernel/mcount-dyn.S
index 1ddf8b1a9345..c8959bdd4f38 100644
--- a/arch/riscv/kernel/mcount-dyn.S
+++ b/arch/riscv/kernel/mcount-dyn.S
@@ -74,9 +74,12 @@ ENTRY(ftrace_caller)
/*
* a0: the address in the caller when calling ftrace_caller
* a1: the caller's return address
+ * a2: the address of global variable function_trace_op
*/
ld a1, -8(s0)
addi a0, ra, -MCOUNT_INSN_SIZE
+ la t5, function_trace_op
+ ld a2, 0(t5)

Alan Kao

Jan 18, 2018, 10:46:20 AM
to Palmer Dabbelt, Albert Ou, Steven Rostedt, Ingo Molnar, Masahiro Yamada, Kamil Rytarowski, Andrew Morton, pat...@groups.riscv.org, linux-...@vger.kernel.org, Stefan O'Rear, Alan Kao, Greentime Hu
Cc: Greentime Hu <gree...@andestech.com>
Signed-off-by: Alan Kao <ala...@andestech.com>
---
arch/riscv/Kconfig | 1 +
arch/riscv/kernel/ftrace.c | 17 ++++++
arch/riscv/kernel/mcount-dyn.S | 124 +++++++++++++++++++++++++++++++++++++++++
3 files changed, 142 insertions(+)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index a5c5c88700d4..bb2c64976583 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -119,6 +119,7 @@ config ARCH_RV64I
select HAVE_FUNCTION_GRAPH_TRACER
select HAVE_FTRACE_MCOUNT_RECORD
select HAVE_DYNAMIC_FTRACE
+ select HAVE_DYNAMIC_FTRACE_WITH_REGS

endchoice

diff --git a/arch/riscv/kernel/ftrace.c b/arch/riscv/kernel/ftrace.c
index dce1286af9b0..3922b29c107b 100644
--- a/arch/riscv/kernel/ftrace.c
+++ b/arch/riscv/kernel/ftrace.c
@@ -105,6 +105,23 @@ int __init ftrace_dyn_arch_init(void)
}
#endif

+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
+int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
+ unsigned long addr)
+{
+ unsigned int offset = (unsigned int)(old_addr - rec->ip);
+ unsigned int auipc_call = to_auipc_insn(offset);
+ unsigned int jalr_call = to_jalr_insn(offset);
+ unsigned int calls[2] = {auipc_call, jalr_call};
+ int ret = ftrace_check_current_call(rec->ip, calls);
+
+ if (ret)
+ return ret;
+
+ return __ftrace_modify_call(rec->ip, addr, true);
+}
+#endif
+
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
/*
* Most of this function is copied from arm64.
diff --git a/arch/riscv/kernel/mcount-dyn.S b/arch/riscv/kernel/mcount-dyn.S
index c8959bdd4f38..e123ab7a8f32 100644
--- a/arch/riscv/kernel/mcount-dyn.S
+++ b/arch/riscv/kernel/mcount-dyn.S
@@ -115,3 +115,127 @@ ftrace_call:
+ .endm
+
+ .endm
+
+ .macro RESTORE_GRAPH_REG_ARGS
+ ld a0, PT_T0(sp)
+ ld a1, PT_T1(sp)
+#ifdef HAVE_FUNCTION_GRAPH_FP_TEST
+ ld a2, PT_T2(sp)
+#endif
+ .endm
+
+/*
+ * Most of the contents are the same as ftrace_caller.
+ */
+ENTRY(ftrace_regs_caller)
+ /*
+ * a3: the address of all registers in the stack
+ */
+ ld a1, -8(s0)
+ addi a0, ra, -MCOUNT_INSN_SIZE
+ la t5, function_trace_op
+ ld a2, 0(t5)
+ addi a3, sp, -(PT_SIZE_ON_STACK+16)
+
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+ addi t0, s0, -8
+ mv t1, a0
+#ifdef HAVE_FUNCTION_GRAPH_FP_TEST
+ ld t2, -16(s0)
+#endif
+#endif
+ SAVE_ALL
+
+ftrace_regs_call:
+ .global ftrace_regs_call
+ addi x0, x0, 0
+ addi x0, x0, 0
+
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+ RESTORE_GRAPH_REG_ARGS
+ call ftrace_graph_caller
+#endif
+

Alan Kao

unread,
Jan 18, 2018, 10:46:23 AM1/18/18
to Palmer Dabbelt, Albert Ou, Steven Rostedt, Ingo Molnar, Masahiro Yamada, Kamil Rytarowski, Andrew Morton, pat...@groups.riscv.org, linux-...@vger.kernel.org, Stefan O'Rear, Alan Kao, Greentime Hu
In walk_stackframe, the pc is now obtained by calling
ftrace_graph_ret_addr instead of by manual calculation.

Note that the original calculation,
pc = frame->ra - 4
is buggy when the instruction at the return address happens to be a
compressed instruction. But since this is not a critical part of ftrace,
it is ignored for now to ease the review process.

Cc: Greentime Hu <gree...@andestech.com>
Signed-off-by: Alan Kao <ala...@andestech.com>
---
arch/riscv/include/asm/ftrace.h | 1 +
arch/riscv/kernel/ftrace.c | 2 +-
arch/riscv/kernel/stacktrace.c | 6 ++++++
3 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/arch/riscv/include/asm/ftrace.h b/arch/riscv/include/asm/ftrace.h
index 429a6a156645..6e4b4c96b63e 100644
--- a/arch/riscv/include/asm/ftrace.h
+++ b/arch/riscv/include/asm/ftrace.h
@@ -8,6 +8,7 @@
#if defined(CONFIG_FUNCTION_GRAPH_TRACER) && defined(CONFIG_FRAME_POINTER)
#define HAVE_FUNCTION_GRAPH_FP_TEST
#endif
+#define HAVE_FUNCTION_GRAPH_RET_ADDR_PTR

#define ARCH_SUPPORTS_FTRACE_OPS 1
#ifndef __ASSEMBLY__
diff --git a/arch/riscv/kernel/ftrace.c b/arch/riscv/kernel/ftrace.c
index 3922b29c107b..ded8ab25f4ad 100644
--- a/arch/riscv/kernel/ftrace.c
+++ b/arch/riscv/kernel/ftrace.c
@@ -150,7 +150,7 @@ void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr,
return;

Palmer Dabbelt

unread,
Feb 7, 2018, 7:37:12 PM2/7/18
to none...@gmail.com, alb...@sifive.com, ros...@goodmis.org, mi...@redhat.com, yamada....@socionext.com, n...@gmx.com, ak...@linux-foundation.org, pat...@groups.riscv.org, linux-...@vger.kernel.org, sor...@gmail.com, ala...@andestech.com, gree...@andestech.com
Sorry it took me a while to get around to these. Do you mind submitting a v4
that's based on linux-4.15? I'm getting all sorts of merge errors trying to
"git am" these.

Alan Kao

unread,
Feb 7, 2018, 7:52:54 PM2/7/18
to Palmer Dabbelt, alb...@sifive.com, ros...@goodmis.org, mi...@redhat.com, yamada....@socionext.com, n...@gmx.com, ak...@linux-foundation.org, pat...@groups.riscv.org, linux-...@vger.kernel.org, sor...@gmail.com, ala...@andestech.com, gree...@andestech.com
Sure. But do you mean the riscv-linux-4.15 at the github repo or
Linus's linux-4.15-rc, say, rc8?

Also, we have fixed our internal email server issue, so I will send this
v4 set using ala...@andestech.com, by which I signed off all previous
ftrace patches, instead of the current gmail one. I hope this won't be
confusing to you.

Thanks,
Alan

Alan Kao

unread,
Feb 7, 2018, 9:57:58 PM2/7/18
to Palmer Dabbelt, alb...@sifive.com, ros...@goodmis.org, mi...@redhat.com, yamada....@socionext.com, n...@gmx.com, ak...@linux-foundation.org, pat...@groups.riscv.org, linux-...@vger.kernel.org, sor...@gmail.com, ala...@andestech.com, gree...@andestech.com
On Wed, Feb 07, 2018 at 04:37:10PM -0800, Palmer Dabbelt wrote:
Sorry for the misunderstanding in the previous email. I will rebase the
set on linux-4.15. Thanks.

Alan Kao

unread,
Feb 13, 2018, 12:17:22 AM2/13/18
to Palmer Dabbelt, Albert Ou, Steven Rostedt, Ingo Molnar, pat...@groups.riscv.org, linux-...@vger.kernel.org, gree...@andestech.com, Alan Kao
This patch set includes the building blocks of dynamic ftrace features
for RISC-V machines.

Changes in v4:
- Organize code structure according to changes in v3
- Rebase onto the riscv-linux-4.15 branch at github's
riscv/riscv-linux repo. Note that this set is based on the previous
ftrace patch, which provided basic support.

Changes in v3:
- Replace the nops at the tracer call sites with "call ftrace_stub"
instructions for better readability (1/6, 2/6 and 5/6)

Changes in v2:
- Fix the return value when writing to kernel text fails (2/6)
- Replace manual comparisons by calling memcmp (2/6)
- Simplify the conditional assignment in the Makefile (1/6)

Alan Kao (6):
riscv/ftrace: Add RECORD_MCOUNT support
riscv/ftrace: Add dynamic function tracer support
riscv/ftrace: Add dynamic function graph tracer support
riscv/ftrace: Add ARCH_SUPPORTS_FTRACE_OPS support
riscv/ftrace: Add DYNAMIC_FTRACE_WITH_REGS support
riscv/ftrace: Add HAVE_FUNCTION_GRAPH_RET_ADDR_PTR support

arch/riscv/Kconfig | 3 +
arch/riscv/Makefile | 3 +
arch/riscv/include/asm/ftrace.h | 56 ++++++++++
arch/riscv/kernel/Makefile | 5 +-
arch/riscv/kernel/ftrace.c | 175 ++++++++++++++++++++++++++++-
arch/riscv/kernel/mcount-dyn.S | 239 ++++++++++++++++++++++++++++++++++++++++
arch/riscv/kernel/mcount.S | 22 ++--
arch/riscv/kernel/stacktrace.c | 6 +
scripts/recordmcount.pl | 5 +
9 files changed, 501 insertions(+), 13 deletions(-)
create mode 100644 arch/riscv/kernel/mcount-dyn.S

--
2.15.1

Alan Kao

unread,
Feb 13, 2018, 12:17:24 AM2/13/18
to Palmer Dabbelt, Albert Ou, Steven Rostedt, Ingo Molnar, pat...@groups.riscv.org, linux-...@vger.kernel.org, gree...@andestech.com, Alan Kao
Now recordmcount.pl recognizes RISC-V object files. For the mechanism to
work, we have to disable linker relaxation: relaxation happens after the
script records the offsets of _mcount call sites, which would leave the
record unreliable.
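To make the failure mode concrete, here is a toy model (our illustration, not the toolchain's actual behavior; the instruction names and sizes are made up) of how a recorded offset goes stale once the linker shrinks earlier code:

```python
# Toy model of why linker relaxation breaks recordmcount: the script
# records byte offsets of the 'call _mcount' sites first, then
# relaxation shrinks earlier code, so the recorded offsets no longer
# point at the calls.
insns = [
    ("auipc+jalr", 8),   # a relaxable far call early in the function
    ("nop", 4),
    ("call_mcount", 8),  # the site recordmcount wants to patch later
]

def offsets(seq):
    """Map each instruction name to its byte offset in the sequence."""
    out, pos = {}, 0
    for name, size in seq:
        out[name] = pos
        pos += size
    return out

recorded = offsets(insns)["call_mcount"]   # offset noted before relaxation

# Relaxation: the early auipc+jalr pair becomes a single 4-byte jal.
relaxed = [("jal", 4)] + insns[1:]
actual = offsets(relaxed)["call_mcount"]

assert recorded == 12 and actual == 8
assert recorded != actual   # stale record -> patching the wrong bytes
print("recorded", recorded, "actual", actual)
```

Disabling relaxation (presumably via a flag like `-mno-relax` in the Makefile change, whose body is elided from the quote) keeps the call sites where recordmcount.pl saw them.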

Cc: Greentime Hu <gree...@andestech.com>
Signed-off-by: Alan Kao <ala...@andestech.com>
---
arch/riscv/Kconfig | 1 +
arch/riscv/Makefile | 3 +++
scripts/recordmcount.pl | 5 +++++
3 files changed, 9 insertions(+)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 9e1074fd5cca..570af19a54ea 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -120,6 +120,7 @@ config ARCH_RV64I

Alan Kao

unread,
Feb 13, 2018, 12:17:29 AM2/13/18
to Palmer Dabbelt, Albert Ou, Steven Rostedt, Ingo Molnar, pat...@groups.riscv.org, linux-...@vger.kernel.org, gree...@andestech.com, Alan Kao
We now have dynamic ftrace with the following added items:

* ftrace_make_call, ftrace_make_nop (in kernel/ftrace.c)
The two functions turn each recorded call site of a filtered function
into a call to ftrace_caller, or back into nops

* ftrace_update_ftrace_func (in kernel/ftrace.c)
Turns the nops at ftrace_call into a call to a generic entry for
function tracers.

* ftrace_caller (in kernel/mcount-dyn.S)
The entry that each filtered _mcount call site branches to once tracing
is enabled.

Also, this patch fixes the semantic problems in mcount.S, which will be
kept only as a reference implementation once dynamic ftrace is in
place.
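As a sanity check of the instruction encoding, here is a Python sketch (ours, for illustration only) of what make_call() emits and why to_auipc_insn needs AUIPC_PAD. The constant values are assumptions matching the standard `auipc ra` / `jalr ra, ra` encodings, since the `#define`s themselves are elided from the quoted hunk:

```python
# Assumed encodings for the auipc+jalr mcount unit (values not taken
# from the quoted patch, which elides the constant definitions):
JALR_SIGN_MASK    = 0x00000800
JALR_OFFSET_MASK  = 0x00000fff
JALR_SHIFT        = 20
JALR_BASIC        = 0x000080e7   # jalr ra, ra, 0
AUIPC_OFFSET_MASK = 0xfffff000
AUIPC_PAD         = 0x00001000
AUIPC_BASIC       = 0x00000097   # auipc ra, 0

def to_jalr_insn(offset):
    # I-type immediate lives in bits 31:20
    return ((offset & JALR_OFFSET_MASK) << JALR_SHIFT) | JALR_BASIC

def to_auipc_insn(offset):
    # jalr sign-extends its 12-bit immediate, so when bit 11 of the
    # offset is set, the auipc upper part must be bumped by 0x1000
    # to compensate.
    if offset & JALR_SIGN_MASK:
        return (((offset & AUIPC_OFFSET_MASK) + AUIPC_PAD) & 0xffffffff) | AUIPC_BASIC
    return (offset & AUIPC_OFFSET_MASK) | AUIPC_BASIC

def decode(auipc, jalr):
    """Recover the pc-relative target the pair jumps to."""
    hi = auipc & 0xfffff000
    lo = (jalr >> JALR_SHIFT) & 0xfff
    if lo & 0x800:            # jalr immediate is sign-extended
        lo -= 0x1000
    return (hi + lo) & 0xffffffff

for off in (0x1234, 0x1834, (-0x2000) & 0xffffffff):
    pair = (to_auipc_insn(off), to_jalr_insn(off))
    assert decode(*pair) == off
print("round-trip ok")
```

The round trip covers a plain offset, one with bit 11 set (exercising AUIPC_PAD), and a negative offset.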

Cc: Greentime Hu <gree...@andestech.com>
Signed-off-by: Alan Kao <ala...@andestech.com>
---
arch/riscv/Kconfig | 1 +
arch/riscv/include/asm/ftrace.h | 54 +++++++++++++++++++++
arch/riscv/kernel/Makefile | 5 +-
arch/riscv/kernel/ftrace.c | 103 +++++++++++++++++++++++++++++++++++++++-
arch/riscv/kernel/mcount-dyn.S | 50 +++++++++++++++++++
arch/riscv/kernel/mcount.S | 22 +++++----
6 files changed, 223 insertions(+), 12 deletions(-)
create mode 100644 arch/riscv/kernel/mcount-dyn.S

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 570af19a54ea..df05acbb53d3 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -121,6 +121,7 @@ config ARCH_RV64I
select HAVE_FUNCTION_TRACER
select HAVE_FUNCTION_GRAPH_TRACER
select HAVE_FTRACE_MCOUNT_RECORD
+ select HAVE_DYNAMIC_FTRACE

endchoice

diff --git a/arch/riscv/include/asm/ftrace.h b/arch/riscv/include/asm/ftrace.h
index 66d4175eb13e..078743aacfd3 100644
--- a/arch/riscv/include/asm/ftrace.h
+++ b/arch/riscv/include/asm/ftrace.h
@@ -8,3 +8,57 @@
#if defined(CONFIG_FUNCTION_GRAPH_TRACER) && defined(CONFIG_FRAME_POINTER)
#define HAVE_FUNCTION_GRAPH_FP_TEST
#endif
+#define make_call(caller, callee, call) \
+do { \
+ call[0] = to_auipc_insn((unsigned int)((unsigned long)callee - \
+ (unsigned long)caller)); \
+ call[1] = to_jalr_insn((unsigned int)((unsigned long)callee - \
+ (unsigned long)caller)); \
+} while (0)
+
+#define to_jalr_insn(offset) \
+ (((offset & JALR_OFFSET_MASK) << JALR_SHIFT) | JALR_BASIC)
+
+#define to_auipc_insn(offset) \
+ ((offset & JALR_SIGN_MASK) ? \
+ (((offset & AUIPC_OFFSET_MASK) + AUIPC_PAD) | AUIPC_BASIC) : \
+ ((offset & AUIPC_OFFSET_MASK) | AUIPC_BASIC))
+
+/*
+ * Let auipc+jalr be the basic *mcount unit*, so we make it 8 bytes here.
+ */
+#define MCOUNT_INSN_SIZE 8
+#endif
diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
index 196f62ffc428..d7bdf888f1ca 100644
--- a/arch/riscv/kernel/Makefile
+++ b/arch/riscv/kernel/Makefile
@@ -34,7 +34,8 @@ CFLAGS_setup.o := -mcmodel=medany
obj-$(CONFIG_SMP) += smpboot.o
obj-$(CONFIG_SMP) += smp.o
obj-$(CONFIG_MODULES) += module.o
-obj-$(CONFIG_FUNCTION_TRACER) += mcount.o
-obj-$(CONFIG_FUNCTION_GRAPH_TRACER) += ftrace.o
+
+obj-$(CONFIG_FUNCTION_TRACER) += mcount.o ftrace.o
+obj-$(CONFIG_DYNAMIC_FTRACE) += mcount-dyn.o

clean:
diff --git a/arch/riscv/kernel/ftrace.c b/arch/riscv/kernel/ftrace.c
index d0de68d144cb..be4b24332d97 100644
--- a/arch/riscv/kernel/ftrace.c
+++ b/arch/riscv/kernel/ftrace.c
@@ -6,9 +6,109 @@
*/

#include <linux/ftrace.h>
+#include <linux/uaccess.h>
+#include <asm/cacheflush.h>

+ unsigned int call[2];
+ unsigned int nops[2] = {NOP4, NOP4};
+ int ret = 0;
+
+ make_call(hook_pos, target, call);
+
+ /* replace the auipc-jalr pair at once */
+ ret = probe_kernel_write((void *)hook_pos, enable ? call : nops,
+ MCOUNT_INSN_SIZE);
+ /* return must be -EPERM on write error */
+ if (ret)
+ return -EPERM;
+
+ smp_mb();
+ flush_icache_range((void *)hook_pos, (void *)hook_pos + MCOUNT_INSN_SIZE);
+
+ return 0;
+}
+
+int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
+{
+ int ret = ftrace_check_current_call(rec->ip, NULL);
+
+ if (ret)
+ return ret;
+
+ return __ftrace_modify_call(rec->ip, addr, true);
+}
+
+int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec,
+ unsigned long addr)
+{
+ unsigned int call[2];
+ int ret;
+
+ make_call(rec->ip, addr, call);
+ ret = ftrace_check_current_call(rec->ip, call);
+
+ if (ret)
+ return ret;
+
+ return __ftrace_modify_call(rec->ip, addr, false);
+}
+
+int ftrace_update_ftrace_func(ftrace_func_t func)
+{
+ int ret = __ftrace_modify_call((unsigned long)&ftrace_call,
+ (unsigned long)func, true);
+ if (!ret) {
+ ret = __ftrace_modify_call((unsigned long)&ftrace_regs_call,
+ (unsigned long)func, true);
+ }
+
+ return ret;
+}
+
+int __init ftrace_dyn_arch_init(void)
+{
+ return 0;
+}
+#endif
+
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
/*
- * Most of this file is copied from arm64.
+ * Most of this function is copied from arm64.
*/
void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr,
unsigned long frame_pointer)
@@ -39,3 +139,4 @@ void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr,
return;
*parent = return_hooker;
}
+#endif
diff --git a/arch/riscv/kernel/mcount-dyn.S b/arch/riscv/kernel/mcount-dyn.S
new file mode 100644
index 000000000000..a3ebeadbe698
--- /dev/null
+++ b/arch/riscv/kernel/mcount-dyn.S
@@ -0,0 +1,50 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2017 Andes Technology Corporation */
+
+#include <linux/init.h>
+#include <linux/linkage.h>
+#include <asm/asm.h>
+#include <asm/csr.h>
+#include <asm/unistd.h>
+#include <asm/thread_info.h>
+#include <asm/asm-offsets.h>
+#include <asm-generic/export.h>
+#include <asm/ftrace.h>
+
+ .text
+
+ .macro SAVE_ABI_STATE
+ addi sp, sp, -16
+ sd s0, 0(sp)
+ sd ra, 8(sp)
+ addi s0, sp, 16
+ .endm
+
+ .macro RESTORE_ABI_STATE
+ ld ra, 8(sp)
+ ld s0, 0(sp)
+ addi sp, sp, 16
+ .endm
+
+ENTRY(ftrace_caller)
+ /*
+ * a0: the address in the caller when calling ftrace_caller
+ * a1: the caller's return address
+ */
+ ld a1, -8(s0)
+ addi a0, ra, -MCOUNT_INSN_SIZE
+ SAVE_ABI_STATE
+ftrace_call:
+ .global ftrace_call
+ /*
+ * For the dynamic ftrace to work, here we should reserve at least
+ * 8 bytes for a functional auipc-jalr pair. The following call
+ * serves this purpose.
+ *
+ * Calling ftrace_update_ftrace_func would overwrite the nops below.
+ * Check ftrace_modify_all_code for details.
+ */
+ call ftrace_stub
+ RESTORE_ABI_STATE
+ ret
+ENDPROC(ftrace_caller)
diff --git a/arch/riscv/kernel/mcount.S b/arch/riscv/kernel/mcount.S
index c46a778627be..ce9bdc57a2a1 100644
--- a/arch/riscv/kernel/mcount.S
+++ b/arch/riscv/kernel/mcount.S
@@ -32,13 +32,13 @@
addi s0, sp, 32
.endm

- .macro STORE_ABI_STATE
+ .macro RESTORE_ABI_STATE
ld ra, 8(sp)
ld s0, 0(sp)
addi sp, sp, 16

Alan Kao

unread,
Feb 13, 2018, 12:17:34 AM2/13/18
to Palmer Dabbelt, Albert Ou, Steven Rostedt, Ingo Molnar, pat...@groups.riscv.org, linux-...@vger.kernel.org, gree...@andestech.com, Alan Kao
Once the function_graph tracer is enabled, a filtered function has the
following call sequence:

* ftrace_caller ==> on/off by ftrace_make_call/ftrace_make_nop
* ftrace_graph_caller
* ftrace_graph_call ==> on/off by ftrace_en/disable_ftrace_graph_caller
* prepare_ftrace_return

With the upcoming DYNAMIC_FTRACE_WITH_REGS feature in mind, it is more
extensible to have a dedicated ftrace_graph_caller function instead of
calling prepare_ftrace_return directly from ftrace_caller.
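The enable/disable paths in this patch follow a verify-then-patch discipline: check what the call site currently holds before rewriting it. A hypothetical model of that flow (class and method names are ours, not the kernel's):

```python
# Hypothetical model of the verify-then-patch discipline used by
# ftrace_enable/disable_ftrace_graph_caller.
EINVAL = 22

class PatchSite:
    """Models the 8-byte ftrace_graph_call site in ftrace_graph_caller."""
    def __init__(self):
        self.target = "ftrace_stub"     # initial state: call ftrace_stub

    def check_current_call(self, expected):
        # mirrors ftrace_check_current_call(): refuse to patch if the
        # site does not hold what we believe it holds
        return 0 if self.target == expected else -EINVAL

    def modify_call(self, new_target, enable):
        # mirrors __ftrace_modify_call(): write a call or nops
        self.target = new_target if enable else "nop"
        return 0

site = PatchSite()
# First enable: the site should still contain the initial stub call.
assert site.check_current_call("ftrace_stub") == 0
site.modify_call("prepare_ftrace_return", True)
assert site.target == "prepare_ftrace_return"

# Disable: verify the enable path did its job, then write nops.
assert site.check_current_call("prepare_ftrace_return") == 0
site.modify_call("prepare_ftrace_return", False)
assert site.target == "nop"
print("patch-site model ok")
```

The `init_graph` flag in the real code exists precisely because the expected contents differ between the first enable (stub call) and later enables (nops).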

Cc: Greentime Hu <gree...@andestech.com>
Signed-off-by: Alan Kao <ala...@andestech.com>
---
arch/riscv/kernel/ftrace.c | 55 +++++++++++++++++++++++++++++++++++-
arch/riscv/kernel/mcount-dyn.S | 64 ++++++++++++++++++++++++++++++++++++++++++
2 files changed, 118 insertions(+), 1 deletion(-)

diff --git a/arch/riscv/kernel/ftrace.c b/arch/riscv/kernel/ftrace.c
index be4b24332d97..5bbe1afd9463 100644
--- a/arch/riscv/kernel/ftrace.c
+++ b/arch/riscv/kernel/ftrace.c
@@ -139,4 +139,57 @@ void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr,
return;
*parent = return_hooker;
}
-#endif
+
+#ifdef CONFIG_DYNAMIC_FTRACE
+extern void ftrace_graph_call(void);
+int ftrace_enable_ftrace_graph_caller(void)
+{
+ unsigned int call[2];
+ static int init_graph = 1;
+ int ret;
+
+ make_call(&ftrace_graph_call, &ftrace_stub, call);
+
+ /*
+ * When enabling the graph tracer for the first time, ftrace_graph_call
+ * should contain a call to ftrace_stub. Once it has been disabled,
+ * the 8 bytes at that position become NOPs.
+ */
+ if (init_graph) {
+ ret = ftrace_check_current_call((unsigned long)&ftrace_graph_call,
+ call);
+ init_graph = 0;
+ } else {
+ ret = ftrace_check_current_call((unsigned long)&ftrace_graph_call,
+ NULL);
+ }
+
+ if (ret)
+ return ret;
+
+ return __ftrace_modify_call((unsigned long)&ftrace_graph_call,
+ (unsigned long)&prepare_ftrace_return, true);
+}
+
+int ftrace_disable_ftrace_graph_caller(void)
+{
+ unsigned int call[2];
+ int ret;
+
+ make_call(&ftrace_graph_call, &prepare_ftrace_return, call);
+
+ /*
+ * This is to make sure that ftrace_enable_ftrace_graph_caller
+ * did the right thing.
+ */
+ ret = ftrace_check_current_call((unsigned long)&ftrace_graph_call,
+ call);
+
+ if (ret)
+ return ret;
+
+ return __ftrace_modify_call((unsigned long)&ftrace_graph_call,
+ (unsigned long)&prepare_ftrace_return, false);
+}
+#endif /* CONFIG_DYNAMIC_FTRACE */
+#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
diff --git a/arch/riscv/kernel/mcount-dyn.S b/arch/riscv/kernel/mcount-dyn.S
index a3ebeadbe698..739e07a6fd85 100644
--- a/arch/riscv/kernel/mcount-dyn.S
+++ b/arch/riscv/kernel/mcount-dyn.S
ld ra, 8(sp)
ld s0, 0(sp)
addi sp, sp, 16
+#endif
.endm

+ .macro RESTORE_GRAPH_ARGS
+ ld a0, 24(sp)
+ ld a1, 16(sp)
+#ifdef HAVE_FUNCTION_GRAPH_FP_TEST
+ ld a2, 8(sp)
+#endif
+ .endm
+
+ENTRY(ftrace_graph_caller)
+ addi sp, sp, -16
+ sd s0, 0(sp)
+ sd ra, 8(sp)
+ addi s0, sp, 16
+ftrace_graph_call:
+ .global ftrace_graph_call
+ /*
+ * Calling ftrace_enable/disable_ftrace_graph_caller would overwrite the
+ * call below. Check ftrace_modify_all_code for details.
+ */
+ call ftrace_stub
+ ld ra, 8(sp)
+ ld s0, 0(sp)
+ addi sp, sp, 16
+ ret
+ENDPROC(ftrace_graph_caller)
+
ENTRY(ftrace_caller)
/*
* a0: the address in the caller when calling ftrace_caller
@@ -33,6 +77,20 @@ ENTRY(ftrace_caller)
*/
ld a1, -8(s0)
addi a0, ra, -MCOUNT_INSN_SIZE
+
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+ /*
+ * the graph tracer (specifically, prepare_ftrace_return) needs these
+ * arguments but for now the function tracer occupies the regs, so we
+ * save them in temporary regs to recover later.
+ */
+ addi t0, s0, -8
+ mv t1, a0
+#ifdef HAVE_FUNCTION_GRAPH_FP_TEST
+ ld t2, -16(s0)
+#endif
+#endif
+
SAVE_ABI_STATE
ftrace_call:
.global ftrace_call
@@ -45,6 +103,12 @@ ftrace_call:
* Check ftrace_modify_all_code for details.
*/
call ftrace_stub
+
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+ RESTORE_GRAPH_ARGS
+ call ftrace_graph_caller
+#endif
+

Alan Kao

unread,
Feb 13, 2018, 12:17:38 AM2/13/18
to Palmer Dabbelt, Albert Ou, Steven Rostedt, Ingo Molnar, pat...@groups.riscv.org, linux-...@vger.kernel.org, gree...@andestech.com, Alan Kao
Cc: Greentime Hu <gree...@andestech.com>
Signed-off-by: Alan Kao <ala...@andestech.com>
---
arch/riscv/include/asm/ftrace.h | 1 +
arch/riscv/kernel/mcount-dyn.S | 3 +++
2 files changed, 4 insertions(+)

diff --git a/arch/riscv/include/asm/ftrace.h b/arch/riscv/include/asm/ftrace.h
index 078743aacfd3..fedadc40e358 100644
--- a/arch/riscv/include/asm/ftrace.h
+++ b/arch/riscv/include/asm/ftrace.h
@@ -9,6 +9,7 @@
#define HAVE_FUNCTION_GRAPH_FP_TEST
#endif

+#define ARCH_SUPPORTS_FTRACE_OPS 1
#ifndef __ASSEMBLY__
void _mcount(void);
static inline unsigned long ftrace_call_adjust(unsigned long addr)
diff --git a/arch/riscv/kernel/mcount-dyn.S b/arch/riscv/kernel/mcount-dyn.S
index 739e07a6fd85..6bbc3f88fcb3 100644
--- a/arch/riscv/kernel/mcount-dyn.S
+++ b/arch/riscv/kernel/mcount-dyn.S
@@ -74,9 +74,12 @@ ENTRY(ftrace_caller)
/*
* a0: the address in the caller when calling ftrace_caller
* a1: the caller's return address
+ * a2: the address of global variable function_trace_op
*/
ld a1, -8(s0)
addi a0, ra, -MCOUNT_INSN_SIZE
+ la t5, function_trace_op
+ ld a2, 0(t5)

Alan Kao

unread,
Feb 13, 2018, 12:17:44 AM2/13/18
to Palmer Dabbelt, Albert Ou, Steven Rostedt, Ingo Molnar, pat...@groups.riscv.org, linux-...@vger.kernel.org, gree...@andestech.com, Alan Kao
Cc: Greentime Hu <gree...@andestech.com>
Signed-off-by: Alan Kao <ala...@andestech.com>
---
arch/riscv/Kconfig | 1 +
arch/riscv/include/asm/ftrace.h | 1 +
arch/riscv/kernel/ftrace.c | 17 ++++++
arch/riscv/kernel/mcount-dyn.S | 122 ++++++++++++++++++++++++++++++++++++++++
4 files changed, 141 insertions(+)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index df05acbb53d3..e4f9bac8ed76 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -122,6 +122,7 @@ config ARCH_RV64I
select HAVE_FUNCTION_GRAPH_TRACER
select HAVE_FTRACE_MCOUNT_RECORD
select HAVE_DYNAMIC_FTRACE
+ select HAVE_DYNAMIC_FTRACE_WITH_REGS

endchoice

diff --git a/arch/riscv/include/asm/ftrace.h b/arch/riscv/include/asm/ftrace.h
index fedadc40e358..c6dcc5291f97 100644
--- a/arch/riscv/include/asm/ftrace.h
+++ b/arch/riscv/include/asm/ftrace.h
@@ -8,6 +8,7 @@
#if defined(CONFIG_FUNCTION_GRAPH_TRACER) && defined(CONFIG_FRAME_POINTER)
#define HAVE_FUNCTION_GRAPH_FP_TEST
#endif
+#define HAVE_FUNCTION_GRAPH_RET_ADDR_PTR

#define ARCH_SUPPORTS_FTRACE_OPS 1
#ifndef __ASSEMBLY__
diff --git a/arch/riscv/kernel/ftrace.c b/arch/riscv/kernel/ftrace.c
index 5bbe1afd9463..48b5353691c3 100644
--- a/arch/riscv/kernel/ftrace.c
+++ b/arch/riscv/kernel/ftrace.c
@@ -106,6 +106,23 @@ int __init ftrace_dyn_arch_init(void)
}
#endif

+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
+int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
+ unsigned long addr)
+{
+ unsigned int call[2];
+ int ret;
+
+ make_call(rec->ip, old_addr, call);
+ ret = ftrace_check_current_call(rec->ip, call);
+
+ if (ret)
+ return ret;
+
+ return __ftrace_modify_call(rec->ip, addr, true);
+}
+#endif
+
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
/*
* Most of this function is copied from arm64.
diff --git a/arch/riscv/kernel/mcount-dyn.S b/arch/riscv/kernel/mcount-dyn.S
index 6bbc3f88fcb3..35a6ed76cb8b 100644
--- a/arch/riscv/kernel/mcount-dyn.S
+++ b/arch/riscv/kernel/mcount-dyn.S
@@ -115,3 +115,125 @@ ftrace_call:
RESTORE_ABI_STATE
ret
ENDPROC(ftrace_caller)
+
+ .endm
+
+ .endm
+
+ .macro RESTORE_GRAPH_REG_ARGS
+ ld a0, PT_T0(sp)
+ ld a1, PT_T1(sp)
+#ifdef HAVE_FUNCTION_GRAPH_FP_TEST
+ ld a2, PT_T2(sp)
+#endif
+ .endm
+
+/*
+ * Most of the contents are the same as ftrace_caller.
+ */
+ENTRY(ftrace_regs_caller)
+ /*
+ * a3: the address of all registers in the stack
+ */
+ ld a1, -8(s0)
+ addi a0, ra, -MCOUNT_INSN_SIZE
+ la t5, function_trace_op
+ ld a2, 0(t5)
+ addi a3, sp, -(PT_SIZE_ON_STACK+16)
+
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+ addi t0, s0, -8
+ mv t1, a0
+#ifdef HAVE_FUNCTION_GRAPH_FP_TEST
+ ld t2, -16(s0)
+#endif
+#endif
+ SAVE_ALL
+
+ftrace_regs_call:
+ .global ftrace_regs_call
+ call ftrace_stub
+
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+ RESTORE_GRAPH_REG_ARGS
+ call ftrace_graph_caller
+#endif
+

Alan Kao

unread,
Feb 13, 2018, 12:17:50 AM2/13/18
to Palmer Dabbelt, Albert Ou, Steven Rostedt, Ingo Molnar, pat...@groups.riscv.org, linux-...@vger.kernel.org, gree...@andestech.com, Alan Kao
In walk_stackframe, the pc is now obtained by calling
ftrace_graph_ret_addr instead of by manual calculation.

Note that the original calculation,
pc = frame->ra - 4
is buggy when the instruction at the return address happens to be a
compressed instruction. But since this is not a critical part of ftrace,
it is ignored for now to ease the review process.
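The reason a fixed -4 is wrong can be sketched from the ISA's length rule: an instruction whose lowest two bits are both 1 is a normal 32-bit instruction, anything else is a 16-bit compressed one. A minimal illustration (the sample encodings are ours):

```python
# RV64GC instruction-length rule: low two bits 0b11 => 32-bit
# instruction, anything else => 16-bit compressed (C extension).
def insn_length(first_halfword):
    return 4 if (first_halfword & 0x3) == 0x3 else 2

jalr_halfword   = 0x000080e7 & 0xffff   # 32-bit jalr ra, ra, 0
c_jalr_halfword = 0x9082                # 16-bit c.jalr (compressed indirect call)

assert insn_length(jalr_halfword) == 4
assert insn_length(c_jalr_halfword) == 2

# If the caller used the compressed call, the call site is at ra - 2,
# not ra - 4, so the fixed "pc = frame->ra - 4" lands in the middle of
# some other instruction.
ra = 0x80001234          # made-up return address
print("call site if compressed:", hex(ra - insn_length(c_jalr_halfword)))
```

Since the length of the *preceding* instruction cannot be derived from ra alone, delegating to ftrace_graph_ret_addr sidesteps the problem for the graph-tracer case.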

Cc: Greentime Hu <gree...@andestech.com>
Signed-off-by: Alan Kao <ala...@andestech.com>
---
arch/riscv/kernel/ftrace.c | 2 +-
arch/riscv/kernel/stacktrace.c | 6 ++++++
2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/riscv/kernel/ftrace.c b/arch/riscv/kernel/ftrace.c
index 48b5353691c3..1157b6b52d25 100644
--- a/arch/riscv/kernel/ftrace.c
+++ b/arch/riscv/kernel/ftrace.c
@@ -151,7 +151,7 @@ void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr,
return;

Alan Kao

unread,
Feb 20, 2018, 9:49:02 PM2/20/18
to Palmer Dabbelt, Albert Ou, Steven Rostedt, Ingo Molnar, pat...@groups.riscv.org, linux-...@vger.kernel.org, Greentime Ying-Han Hu(胡英漢)
Any comments?

Palmer Dabbelt

unread,
Feb 21, 2018, 11:05:13 AM2/21/18
to ala...@andestech.com, alb...@sifive.com, ros...@goodmis.org, mi...@redhat.com, pat...@groups.riscv.org, linux-...@vger.kernel.org, gree...@andestech.com, ala...@andestech.com
Thanks! I've added this to our for-next tree, it should be good to go for
4.17. Sorry this took a while, I'm still a bit behind from FOSDEM.

Alan Kao

unread,
Feb 22, 2018, 3:48:54 AM2/22/18
to Palmer Dabbelt, alb...@sifive.com, ros...@goodmis.org, mi...@redhat.com, pat...@groups.riscv.org, linux-...@vger.kernel.org, Greentime Ying-Han Hu(胡英漢)
Thanks!

Alan

Palmer Dabbelt

unread,
Feb 23, 2018, 8:02:43 PM2/23/18
to ala...@andestech.com, alb...@sifive.com, ros...@goodmis.org, mi...@redhat.com, pat...@groups.riscv.org, linux-...@vger.kernel.org, gree...@andestech.com
Sorry, my email latency is a bit long now as I'm still backed up from FOSDEM.
This looks good, it should already be in our for-next branch.