mirror of
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
synced 2026-02-28 19:06:51 +01:00
Pull power management updates from Rafael Wysocki:
"By the number of commits, cpufreq is the leading party (again) and the
most visible change there is the removal of the omap-cpufreq driver
that has not been used for a long time (good riddance). There are also
quite a few changes in the cppc_cpufreq driver, mostly related to
fixing its frequency invariance engine in the case when the CPPC
registers used by it are not in PCC. In addition to that, support for
AM62L3 is added to the ti-cpufreq driver and the cpufreq-dt-platdev
list is updated for some platforms. The remaining cpufreq changes are
assorted fixes and cleanups.

Next up is cpuidle and the changes there are dominated by intel_idle
driver updates, mostly related to the new command line facility
allowing users to adjust the list of C-states used by the driver.
There are also a few updates of cpuidle governors, including two menu
governor fixes and some refinements of the teo governor, and a
MAINTAINERS update adding Christian Loehle as a cpuidle reviewer.
[Thanks for stepping up Christian!]

The most significant update related to system suspend and hibernation
is the one to stop freezing the PM runtime workqueue during system PM
transitions which allows some deadlocks to be avoided. There is also a
fix for possible concurrent bit field updates in the core device
suspend code and a few other minor fixes.

Apart from the above, several drivers are updated to discard the
return value of pm_runtime_put() which is going to be converted to a
void function as soon as everybody stops using its return value, PL4
support for Ice Lake is added to the Intel RAPL power capping driver,
and there are assorted cleanups, documentation fixes, and some
cpupower utility improvements.
Specifics:
- Remove the unused omap-cpufreq driver (Andreas Kemnade)
- Optimize error handling code in cpufreq_boost_trigger_state() and
make cpufreq_boost_trigger_state() return -EOPNOTSUPP if no policy
supports boost (Lifeng Zheng)
- Update cpufreq-dt-platdev list for tegra, qcom, TI (Aaron Kling,
Dhruva Gole, and Konrad Dybcio)
- Minor improvements to the cpufreq and cpumask rust implementation
(Alexandre Courbot, Alice Ryhl, Tamir Duberstein, and Yilin Chen)
- Add support for AM62L3 SoC to the ti-cpufreq driver (Dhruva Gole)
- Update arch_freq_scale in the CPPC cpufreq driver's frequency
invariance engine (FIE) in scheduler ticks if the related CPPC
registers are not in PCC (Jie Zhan)
- Assorted minor cleanups and improvements in ARM cpufreq drivers
(Juan Martinez, Felix Gu, Luca Weiss, and Sergey Shtylyov)
- Add generic helpers for sysfs show/store to cppc_cpufreq (Sumit
Gupta)
- Make the scaling_setspeed cpufreq sysfs attribute return the actual
requested frequency to avoid confusion (Pengjie Zhang)
- Simplify the idle CPU time granularity test in the ondemand cpufreq
governor (Frederic Weisbecker)
- Enable asym capacity in intel_pstate only when CPU SMT is not
possible (Yaxiong Tian)
- Update the description of rate_limit_us default value in cpufreq
documentation (Yaxiong Tian)
- Add a command line option to adjust the C-states table in the
intel_idle driver, remove the 'preferred_cstates' module parameter
from it, add C-states validation to it and clean it up (Artem
Bityutskiy)
- Make the menu cpuidle governor always check the time till the
closest timer event when the scheduler tick has been stopped to
prevent it from mistakenly selecting the deepest available idle
state (Rafael Wysocki)
- Update the teo cpuidle governor to avoid making suboptimal
decisions in certain corner cases and generally improve idle state
selection accuracy (Rafael Wysocki)
- Remove an unlikely() annotation on the early-return condition in
menu_select() that leads to branch misprediction 100% of the time
on systems with only 1 idle state enabled, like ARM64 servers
(Breno Leitao)
- Add Christian Loehle to MAINTAINERS as a cpuidle reviewer
(Christian Loehle)
- Stop flagging the PM runtime workqueue as freezable to avoid system
suspend and resume deadlocks in subsystems that assume asynchronous
runtime PM to work during system-wide PM transitions (Rafael
Wysocki)
- Drop redundant NULL pointer checks before acomp_request_free() from
the hibernation code handling image saving (Rafael Wysocki)
- Update wakeup_sources_walk_start() to handle empty lists of wakeup
sources as appropriate (Samuel Wu)
- Make dev_pm_clear_wake_irq() check the power.wakeirq value under
power.lock to avoid race conditions (Gui-Dong Han)
- Avoid bit field races related to power.work_in_progress in the core
device suspend code (Xuewen Yan)
- Make several drivers discard pm_runtime_put() return value in
preparation for converting that function to a void one (Rafael
Wysocki)
- Add PL4 support for Ice Lake to the Intel RAPL power capping driver
(Daniel Tang)
- Replace sprintf() with sysfs_emit() in power capping sysfs show
functions (Sumeet Pawnikar)
- Make dev_pm_opp_get_level() return value match the documentation
after a previous update of the latter (Aleks Todorov)
- Use scoped for each OF child loop in the OPP code (Krzysztof
Kozlowski)
- Fix a bug in an example code snippet and correct typos in the
energy model management documentation (Patrick Little)
- Fix miscellaneous problems in cpupower (Kaushlendra Kumar):
* idle_monitor: Fix incorrect value logged after stop
* Fix inverted APERF capability check
* Use strcspn() to strip trailing newline
* Reset errno before strtoull()
* Show C0 in idle-info dump
- Improve cpupower installation procedure by making the systemd step
optional and allowing users to disable the installation of
systemd's unit file (João Marcos Costa)"
* tag 'pm-6.20-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (65 commits)
PM: sleep: core: Avoid bit field races related to work_in_progress
PM: sleep: wakeirq: harden dev_pm_clear_wake_irq() against races
cpufreq: Documentation: Update description of rate_limit_us default value
cpufreq: intel_pstate: Enable asym capacity only when CPU SMT is not possible
PM: wakeup: Handle empty list in wakeup_sources_walk_start()
PM: EM: Documentation: Fix bug in example code snippet
Documentation: Fix typos in energy model documentation
cpuidle: governors: teo: Refine intercepts-based idle state lookup
cpuidle: governors: teo: Adjust the classification of wakeup events
cpufreq: ondemand: Simplify idle cputime granularity test
cpufreq: userspace: make scaling_setspeed return the actual requested frequency
PM: hibernate: Drop NULL pointer checks before acomp_request_free()
cpufreq: CPPC: Add generic helpers for sysfs show/store
cpufreq: scmi: Fix device_node reference leak in scmi_cpu_domain_id()
cpufreq: ti-cpufreq: add support for AM62L3 SoC
cpufreq: dt-platdev: Add ti,am62l3 to blocklist
cpufreq/amd-pstate: Add comment explaining nominal_perf usage for performance policy
cpufreq: scmi: correct SCMI explanation
cpufreq: dt-platdev: Block the driver from probing on more QC platforms
rust: cpumask: rename methods of Cpumask for clarity and consistency
...
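The CPPC frequency invariance engine (FIE) mentioned above keeps the scheduler's per-CPU frequency scale factor in sync with the performance actually delivered by the hardware. As a rough, self-contained userspace sketch of the underlying arithmetic (assuming the scheduler's 1024 fixed-point capacity unit; the real kernel code works from per-CPU CPPC counter deltas and differs in detail):

```c
#include <assert.h>
#include <stdint.h>

#define SCHED_CAPACITY_SHIFT	10
#define SCHED_CAPACITY_SCALE	(1u << SCHED_CAPACITY_SHIFT)	/* 1024 */

/*
 * Frequency-invariance scale factor: delivered performance relative to
 * the highest performance level, expressed in fixed-point 1024ths and
 * clamped so that running at full speed maps to exactly 1024.
 */
static uint32_t freq_scale(uint64_t delivered_perf, uint64_t highest_perf)
{
	uint64_t scale = (delivered_perf << SCHED_CAPACITY_SHIFT) / highest_perf;

	return scale > SCHED_CAPACITY_SCALE ? SCHED_CAPACITY_SCALE
					    : (uint32_t)scale;
}
```

Updating this value from scheduler ticks (rather than via PCC mailbox transactions) is only viable when the feedback counters live in fast registers, which is why the fix above is scoped to the case where the CPPC registers are not in PCC.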
283 lines
7.1 KiB
C
/* SPDX-License-Identifier: GPL-2.0-only */
/*
 * CPPC (Collaborative Processor Performance Control) methods used
 * by CPUfreq drivers.
 *
 * (C) Copyright 2014, 2015 Linaro Ltd.
 * Author: Ashwin Chaugule <ashwin.chaugule@linaro.org>
 */

#ifndef _CPPC_ACPI_H
#define _CPPC_ACPI_H

#include <linux/acpi.h>
#include <linux/cpufreq.h>
#include <linux/types.h>

#include <acpi/pcc.h>
#include <acpi/processor.h>

/* CPPCv2 and CPPCv3 support */
#define CPPC_V2_REV	2
#define CPPC_V3_REV	3
#define CPPC_V2_NUM_ENT	21
#define CPPC_V3_NUM_ENT	23

#define PCC_CMD_COMPLETE_MASK	(1 << 0)
#define PCC_ERROR_MASK		(1 << 2)

#define MAX_CPC_REG_ENT 21

/* CPPC specific PCC commands. */
#define CMD_READ	0
#define CMD_WRITE	1

#define CPPC_AUTO_ACT_WINDOW_SIG_BIT_SIZE	(7)
#define CPPC_AUTO_ACT_WINDOW_EXP_BIT_SIZE	(3)
#define CPPC_AUTO_ACT_WINDOW_MAX_SIG	((1 << CPPC_AUTO_ACT_WINDOW_SIG_BIT_SIZE) - 1)
#define CPPC_AUTO_ACT_WINDOW_MAX_EXP	((1 << CPPC_AUTO_ACT_WINDOW_EXP_BIT_SIZE) - 1)
/* CPPC_AUTO_ACT_WINDOW_MAX_SIG is 127, so 128 and 129 will decay to 127 when writing */
#define CPPC_AUTO_ACT_WINDOW_SIG_CARRY_THRESH 129
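The autonomous activity window register packs a value as a 7-bit significand scaled by a 3-bit power-of-ten exponent, so significands above 127 must either saturate or carry into the exponent (the comment above notes that 128 and 129 decay to 127). A hypothetical userspace sketch of such an encoder, using local macros that mirror the ones above (the rounding policy here is illustrative, not necessarily the kernel's exact behavior):

```c
#include <assert.h>
#include <stdint.h>

#define SIG_BITS	7
#define EXP_BITS	3
#define MAX_SIG		((1u << SIG_BITS) - 1)	/* 127 */
#define MAX_EXP		((1u << EXP_BITS) - 1)	/* 7 */
#define CARRY_THRESH	129			/* below this, clamp instead of carrying */

/*
 * Encode a window length as (exponent << 7) | significand, where the
 * represented value is significand * 10^exponent.  Significands of 128
 * and 129 clamp to 127; larger values divide by 10 (rounding to
 * nearest) and bump the exponent, saturating once the exponent is full.
 */
static uint32_t encode_auto_act_window(uint64_t val)
{
	unsigned int exp = 0;

	while (val > MAX_SIG) {
		if (val <= CARRY_THRESH || exp == MAX_EXP) {
			val = MAX_SIG;	/* saturate */
			break;
		}
		val = (val + 5) / 10;	/* carry into the exponent */
		exp++;
	}
	return ((uint32_t)exp << SIG_BITS) | (uint32_t)val;
}
```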
#define CPPC_EPP_PERFORMANCE_PREF	0x00
#define CPPC_EPP_ENERGY_EFFICIENCY_PREF	0xFF

/* Each register has the following format. */
struct cpc_reg {
	u8 descriptor;
	u16 length;
	u8 space_id;
	u8 bit_width;
	u8 bit_offset;
	u8 access_width;
	u64 address;
} __packed;
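The bit_width and bit_offset fields describe a sub-field within a wider register, which is why writes need a read-modify-write sequence (serialized by the rmw_lock declared further down in cpc_desc). A minimal userspace sketch of the masking arithmetic, with hypothetical helper names (the kernel does this inside cpc_read()/cpc_write()):

```c
#include <assert.h>
#include <stdint.h>

/* Extract a 'width'-bit field starting at 'offset' from a raw register value. */
static uint64_t cpc_field_get(uint64_t raw, uint8_t width, uint8_t offset)
{
	uint64_t mask = (width == 64) ? ~0ULL : ((1ULL << width) - 1);

	return (raw >> offset) & mask;
}

/* Insert 'val' into the field while preserving all other bits (RMW). */
static uint64_t cpc_field_set(uint64_t raw, uint8_t width, uint8_t offset,
			      uint64_t val)
{
	uint64_t mask = ((width == 64) ? ~0ULL : ((1ULL << width) - 1)) << offset;

	return (raw & ~mask) | ((val << offset) & mask);
}
```

Because the RMW is not atomic with respect to the underlying register, concurrent writers to different fields of the same register would corrupt each other without a lock, which is what rmw_lock guards against.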

/*
 * Each entry in the CPC table is either
 * of type ACPI_TYPE_BUFFER or
 * ACPI_TYPE_INTEGER.
 */
struct cpc_register_resource {
	acpi_object_type type;
	u64 __iomem *sys_mem_vaddr;
	union {
		struct cpc_reg reg;
		u64 int_value;
	} cpc_entry;
};

/* Container to hold the CPC details for each CPU */
struct cpc_desc {
	int num_entries;
	int version;
	int cpu_id;
	int write_cmd_status;
	int write_cmd_id;
	/* Lock used for RMW operations in cpc_write() */
	raw_spinlock_t rmw_lock;
	struct cpc_register_resource cpc_regs[MAX_CPC_REG_ENT];
	struct acpi_psd_package domain_info;
	struct kobject kobj;
};

/* These are indexes into the per-cpu cpc_regs[]. Order is important. */
enum cppc_regs {
	HIGHEST_PERF,
	NOMINAL_PERF,
	LOW_NON_LINEAR_PERF,
	LOWEST_PERF,
	GUARANTEED_PERF,
	DESIRED_PERF,
	MIN_PERF,
	MAX_PERF,
	PERF_REDUC_TOLERANCE,
	TIME_WINDOW,
	CTR_WRAP_TIME,
	REFERENCE_CTR,
	DELIVERED_CTR,
	PERF_LIMITED,
	ENABLE,
	AUTO_SEL_ENABLE,
	AUTO_ACT_WINDOW,
	ENERGY_PERF,
	REFERENCE_PERF,
	LOWEST_FREQ,
	NOMINAL_FREQ,
};

/*
 * Categorization of registers as described
 * in the ACPI v.5.1 spec.
 * XXX: Only filling up ones which are used by governors
 * today.
 */
struct cppc_perf_caps {
	u32 guaranteed_perf;
	u32 highest_perf;
	u32 nominal_perf;
	u32 lowest_perf;
	u32 lowest_nonlinear_perf;
	u32 lowest_freq;
	u32 nominal_freq;
};
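When the _CPC table supplies the optional lowest_freq and nominal_freq anchors, abstract perf values can be mapped to frequencies by linear interpolation between the (lowest_perf, lowest_freq) and (nominal_perf, nominal_freq) points; this is the idea behind the cppc_perf_to_khz() helper declared below. A self-contained sketch of the arithmetic, assuming (as the kernel does) that the table expresses the anchor frequencies in MHz:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for the relevant cppc_perf_caps fields. */
struct perf_caps {
	uint32_t lowest_perf, nominal_perf;
	uint32_t lowest_freq, nominal_freq;	/* MHz, from the _CPC table */
};

/*
 * Map an abstract perf value to kHz by linear interpolation between the
 * two anchor points: freq(perf) = offset + perf * slope.
 */
static uint32_t perf_to_khz(const struct perf_caps *c, uint32_t perf)
{
	int64_t mul = (int64_t)c->nominal_freq - c->lowest_freq;
	int64_t div = (int64_t)c->nominal_perf - c->lowest_perf;
	int64_t offset = c->nominal_freq - (c->nominal_perf * mul) / div;

	return (uint32_t)((offset + ((int64_t)perf * mul) / div) * 1000);
}
```

When the frequency anchors are absent, a driver has to fall back to treating perf values as proportional to a known reference frequency instead.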

struct cppc_perf_ctrls {
	u32 max_perf;
	u32 min_perf;
	u32 desired_perf;
	u32 energy_perf;
	bool auto_sel;
};

struct cppc_perf_fb_ctrs {
	u64 reference;
	u64 delivered;
	u64 reference_perf;
	u64 wraparound_time;
};
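The feedback counters above follow the CPPC scheme in which the delivered counter accumulates at the CPU's actual speed while the reference counter accumulates at a fixed, known rate (reference_perf), so the delivered performance over an interval falls out of the ratio of the two deltas. A hedged userspace sketch of that computation (hypothetical struct and function names; this mirrors what cppc_get_perf_ctrs() consumers such as cppc_cpufreq compute):

```c
#include <assert.h>
#include <stdint.h>

/* Two snapshots of the CPPC feedback counters. */
struct fb_ctrs {
	uint64_t reference;	/* REFERENCE_CTR value */
	uint64_t delivered;	/* DELIVERED_CTR value */
};

/*
 * Delivered performance between snapshots t0 and t1:
 * reference_perf * (delta delivered) / (delta reference).
 */
static uint64_t delivered_perf(const struct fb_ctrs *t0,
			       const struct fb_ctrs *t1,
			       uint64_t reference_perf)
{
	uint64_t d_ref = t1->reference - t0->reference;
	uint64_t d_del = t1->delivered - t0->delivered;

	if (!d_ref)
		return reference_perf;	/* no elapsed reference cycles */

	return (reference_perf * d_del) / d_ref;
}
```

Note that unsigned subtraction handles a single counter wraparound correctly; wraparound_time bounds how often snapshots must be taken for that to hold.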

/* Per CPU container for runtime CPPC management. */
struct cppc_cpudata {
	struct cppc_perf_caps perf_caps;
	struct cppc_perf_ctrls perf_ctrls;
	struct cppc_perf_fb_ctrs perf_fb_ctrs;
	unsigned int shared_type;
	cpumask_var_t shared_cpu_map;
};

#ifdef CONFIG_ACPI_CPPC_LIB
extern int cppc_get_desired_perf(int cpunum, u64 *desired_perf);
extern int cppc_get_nominal_perf(int cpunum, u64 *nominal_perf);
extern int cppc_get_highest_perf(int cpunum, u64 *highest_perf);
extern int cppc_get_perf_ctrs(int cpu, struct cppc_perf_fb_ctrs *perf_fb_ctrs);
extern int cppc_set_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls);
extern int cppc_set_enable(int cpu, bool enable);
extern int cppc_get_perf_caps(int cpu, struct cppc_perf_caps *caps);
extern bool cppc_perf_ctrs_in_pcc_cpu(unsigned int cpu);
extern bool cppc_perf_ctrs_in_pcc(void);
extern unsigned int cppc_perf_to_khz(struct cppc_perf_caps *caps, unsigned int perf);
extern unsigned int cppc_khz_to_perf(struct cppc_perf_caps *caps, unsigned int freq);
extern bool acpi_cpc_valid(void);
extern bool cppc_allow_fast_switch(void);
extern int acpi_get_psd_map(unsigned int cpu, struct cppc_cpudata *cpu_data);
extern int cppc_get_transition_latency(int cpu);
extern bool cpc_ffh_supported(void);
extern bool cpc_supported_by_cpu(void);
extern int cpc_read_ffh(int cpunum, struct cpc_reg *reg, u64 *val);
extern int cpc_write_ffh(int cpunum, struct cpc_reg *reg, u64 val);
extern int cppc_get_epp_perf(int cpunum, u64 *epp_perf);
extern int cppc_set_epp_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls, bool enable);
extern int cppc_set_epp(int cpu, u64 epp_val);
extern int cppc_get_auto_act_window(int cpu, u64 *auto_act_window);
extern int cppc_set_auto_act_window(int cpu, u64 auto_act_window);
extern int cppc_get_auto_sel(int cpu, bool *enable);
extern int cppc_set_auto_sel(int cpu, bool enable);
extern int amd_get_highest_perf(unsigned int cpu, u32 *highest_perf);
extern int amd_get_boost_ratio_numerator(unsigned int cpu, u64 *numerator);
extern int amd_detect_prefcore(bool *detected);
#else /* !CONFIG_ACPI_CPPC_LIB */
static inline int cppc_get_desired_perf(int cpunum, u64 *desired_perf)
{
	return -EOPNOTSUPP;
}
static inline int cppc_get_nominal_perf(int cpunum, u64 *nominal_perf)
{
	return -EOPNOTSUPP;
}
static inline int cppc_get_highest_perf(int cpunum, u64 *highest_perf)
{
	return -EOPNOTSUPP;
}
static inline int cppc_get_perf_ctrs(int cpu, struct cppc_perf_fb_ctrs *perf_fb_ctrs)
{
	return -EOPNOTSUPP;
}
static inline int cppc_set_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls)
{
	return -EOPNOTSUPP;
}
static inline int cppc_set_enable(int cpu, bool enable)
{
	return -EOPNOTSUPP;
}
static inline int cppc_get_perf_caps(int cpu, struct cppc_perf_caps *caps)
{
	return -EOPNOTSUPP;
}
static inline bool cppc_perf_ctrs_in_pcc_cpu(unsigned int cpu)
{
	return false;
}
static inline bool cppc_perf_ctrs_in_pcc(void)
{
	return false;
}
static inline bool acpi_cpc_valid(void)
{
	return false;
}
static inline bool cppc_allow_fast_switch(void)
{
	return false;
}
static inline int cppc_get_transition_latency(int cpu)
{
	return -ENODATA;
}
static inline bool cpc_ffh_supported(void)
{
	return false;
}
static inline int cpc_read_ffh(int cpunum, struct cpc_reg *reg, u64 *val)
{
	return -EOPNOTSUPP;
}
static inline int cpc_write_ffh(int cpunum, struct cpc_reg *reg, u64 val)
{
	return -EOPNOTSUPP;
}
static inline int cppc_set_epp_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls, bool enable)
{
	return -EOPNOTSUPP;
}
static inline int cppc_get_epp_perf(int cpunum, u64 *epp_perf)
{
	return -EOPNOTSUPP;
}
static inline int cppc_set_epp(int cpu, u64 epp_val)
{
	return -EOPNOTSUPP;
}
static inline int cppc_get_auto_act_window(int cpu, u64 *auto_act_window)
{
	return -EOPNOTSUPP;
}
static inline int cppc_set_auto_act_window(int cpu, u64 auto_act_window)
{
	return -EOPNOTSUPP;
}
static inline int cppc_get_auto_sel(int cpu, bool *enable)
{
	return -EOPNOTSUPP;
}
static inline int cppc_set_auto_sel(int cpu, bool enable)
{
	return -EOPNOTSUPP;
}
static inline int amd_get_highest_perf(unsigned int cpu, u32 *highest_perf)
{
	return -ENODEV;
}
static inline int amd_get_boost_ratio_numerator(unsigned int cpu, u64 *numerator)
{
	return -EOPNOTSUPP;
}
static inline int amd_detect_prefcore(bool *detected)
{
	return -ENODEV;
}
#endif /* !CONFIG_ACPI_CPPC_LIB */

#endif /* _CPPC_ACPI_H */