Questions tagged [my.cnf]
mysql - Recommendations for tuning my.cnf / php.ini and partitioning for a dataset with poor indices and a lack of PKs
We have been experimenting for a while now with a particularly challenging and very large dataset we came upon, for which I have found very few ways to create effective indices and primary keys (barring a wholly radical redesign of the database, which is not an economical option at this point). I am looking for suggestions on how to alter either the queries or the table structure (partitioning etc.); long story short, we end up with a lot of time-consuming cartesian joins.
Here is the nitty-gritty:
I have 3 key example tables here, but we sometimes join 2-3 more tables similar to result onto these:
Wherever I have written (??), I am scratching my head as to whether this was really the best way to set it up - please chime in.
samples - our main table of concern, as it holds all our sample case demographics. Fields:
- sampleid (NOT the PK, but UNIQUE ??) varchar(255) - most values are 10-digit integers (??); this is a unique ID throughout the database for a given report.
- case - varchar(255) - again, most are 10-12 digit integers (??); this is a second form of unique ID, BUT a case value of 1000001 may have 1-20 sampleids associated with it in other tables (more later) to provide sequential/chronological information (like a journal).
adj_samples
Contains expanded/annotated data beyond samples; linked to samples by SampleID.
Fields:
RecordID (PK) - just an autonumber keeping count of records (??).
SampleID - linked to the samples table, one sampleid -> many adj_samples records, as a sample may have several annotations, notes, or other minutiae associated with it, which is the purpose of the adj_samples table.
ProbableID - int; just an internal info code for us.
result - Fields:
- SampleID varchar(255) (again, this could probably be an int)
- result (from what I have seen, values are limited to 100 characters, but the field length is set at varchar(255))
Basically linked to adj_samples by SampleID, one DISTINCT(SampleID) to many result rows; the result fields indicate different levels of information, some more detailed than others, and are 50-100 characters long (so why varchar(255)?).
Table Sizes
A sample query would be to give us all of the case counts for a given internal ID (probableID).
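A simplified, hypothetical version of such a query might look like the following; the table and column names follow the description above, and the probableID literal is just a placeholder:

SELECT a.ProbableID, COUNT(DISTINCT s.`case`) AS case_count
FROM samples AS s
JOIN adj_samples AS a ON a.SampleID = s.sampleid
WHERE a.ProbableID = 12345
GROUP BY a.ProbableID;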
We sometimes have to join several tables similar to result afterwards, as there are other tables containing pertinent information - it is not uncommon to find 5-6 stacked up, and it goes totally cartesian. We've indexed as best we could, but we are dealing with so many varchars that could be keys (results.result is an index, but it is 100-255 chars long!).
I also wonder about the strange unused field in samples being the PK; it seems to me SampleID ought to be the PK, since the values are supposed to be unique, but perhaps duplicates were introduced by error?
I am looking for something like a partitioning strategy, and just generally thinking outside the box to get this going. This data doesn't have much in the way of numeric codes or one-to-one tables to use as intermediate index tables.
So here is my my.cnf, if it is at all helpful, since we have major performance problems; the box is an 8-core Intel dedicated CentOS 5.5 server with 16GB of RAM. I find that it often has to write to disk on these large joins. Again, the first thing I think I should deal with is proper field sizes for the data we are storing; varchar(255) for a 10-digit integer seems like a waste.
Will excessive field lengths beyond what you really need affect performance via table size?
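If the sampleid/case values really are plain integers, I imagine the cleanup would look something like this sketch (assuming duplicates are resolved first - this is untested and the column types are guesses):

ALTER TABLE samples
  MODIFY sampleid BIGINT UNSIGNED NOT NULL,
  MODIFY `case`   BIGINT UNSIGNED NOT NULL,
  ADD PRIMARY KEY (sampleid),
  ADD INDEX idx_case (`case`);

ALTER TABLE adj_samples
  MODIFY SampleID BIGINT UNSIGNED NOT NULL,
  ADD INDEX idx_sample_probable (SampleID, ProbableID);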
A picture of the db schema is also attached.
With EXPLAIN: I really bite the bullet on the initial adj_samples step in the plan - it goes to Using where; Using temporary; Using filesort, then another where on result over 4 rows; all joins are of type ref.
Here is some my.cnf:
Thanks for all your help; I am 6 months into learning MySQL and have learned a lot, but I am looking forward to learning more from you all on this exercise.
To top all this off, in bash top I see the mysql process hit only 30% memory but max out at 200-400% CPU - is this normal, or is my my.cnf screwy?
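For reference, the kind of InnoDB-oriented baseline I have seen suggested for a dedicated 16GB box looks roughly like the sketch below; these figures are generic assumptions, not the contents of my actual file, and they presume the tables are (or will be) InnoDB:

[mysqld]
innodb_buffer_pool_size = 10G    # bulk of the RAM on a dedicated DB server
innodb_log_file_size    = 256M
tmp_table_size          = 256M   # helps the Using temporary steps stay in memory
max_heap_table_size     = 256M
sort_buffer_size        = 2M     # per connection - keep modest
join_buffer_size        = 2M
read_rnd_buffer_size    = 2M
query_cache_size        = 0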
mysql - MySQL on an EC2 instance: results are slow after the first query
I am trying to configure MySQL on an Amazon EC2 micro instance with 613MB of memory. The instance will only be used to run MySQL, so I want to use as much of the memory as possible. We have another instance of the same database running on a different host, so I can easily compare results.
A typical query on the original database takes less than 3 seconds. Before my changes on EC2 it took 46 seconds, but now, after changing the settings, the same query takes only 4 seconds. However, running the same query again seems to take a very long time.
These are the settings I am using in my MySQL my.cnf:
I don't think the MyISAM parameters should even be needed, since everything should be using InnoDB, but I gave them some extra memory just in case.
It would be logical if it were slow on the first run and then ran faster, but this seems to be the opposite.
Any ideas would be very welcome.
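For comparison, a minimal sketch of a low-memory my.cnf for an InnoDB-only workload on a ~613MB instance is below; the values are assumptions for illustration, not the settings referred to above:

[mysqld]
innodb_buffer_pool_size = 256M   # the main InnoDB cache; leave headroom for the OS
innodb_log_buffer_size  = 4M
key_buffer_size         = 8M     # tiny, since everything should be InnoDB
query_cache_size        = 0
max_connections         = 25
sort_buffer_size        = 256K   # per-connection buffers stay small
read_buffer_size        = 256K
tmp_table_size          = 16M
max_heap_table_size     = 16M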
mysql - Rejecting certain queries in MySQL
I cannot find where in the code a certain query is being fired from, and I would like to know whether there is a MySQL configuration that would reject that query when it is fired? In my case, for example, it is UPDATE table SET col1 = NULL, col2 = NULL, col3 = NULL
... Please help!
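As far as I know, stock MySQL has no my.cnf option that rejects one specific statement, but turning on the general query log will at least show which connection fires it. A sketch, run from a MySQL session (the log file path is an assumption and must be writable by mysqld):

SET GLOBAL general_log_file = '/var/log/mysql/general.log';
SET GLOBAL general_log = 'ON';
-- reproduce the UPDATE, then search the log for "UPDATE table SET col1 = NULL"
SET GLOBAL general_log = 'OFF';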
mysql - my.cnf: RAM is being eaten at 1GB every 4 hours
Please help.
After I ran tuning-primer.sh:
Please help me - I have a problem with RAM: it eats 1GB of RAM every 4 hours, and I am now using 18GB out of 24GB.
Please, can anyone help me configure my.cnf? I have a 16-core CPU.
mysql: 5.5.22
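As a rough starting point, the classic upper-bound estimate of MySQL's memory appetite can be computed from the server variables themselves; this sketch ignores per-thread overhead and InnoDB extras, so treat it as an approximation only:

SELECT ( @@key_buffer_size
       + @@innodb_buffer_pool_size
       + @@innodb_log_buffer_size
       + @@query_cache_size
       + @@max_connections * ( @@sort_buffer_size
                             + @@read_buffer_size
                             + @@read_rnd_buffer_size
                             + @@join_buffer_size
                             + @@thread_stack ) )
       / 1024 / 1024 / 1024 AS approx_max_memory_gb;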
mysql - MySQL my.cnf performance tuning recommendations
I am rather hoping someone can offer some help with optimizing a my.cnf file for a very high-volume MySQL database server.
Our web application is used by roughly 300 customers concurrently. We need to tune my.cnf to get the best performance out of this infrastructure.
I am fully aware that indexing and query optimization are a major factor here, but we want to start from a correctly configured system and then systematically rework our queries accordingly.
Any suggestions? Thanks, everyone.
Edit by RolandoMySQLDBA
Since all of your data is MyISAM, please run this query and show the output:
@Rolando - thanks... the result of that query is 4G.
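The query itself is not reproduced here, but a query in the same spirit - summing MyISAM index sizes as a ceiling for key_buffer_size - might look like this sketch:

SELECT CONCAT(CEILING(SUM(index_length) / POWER(1024, 3)), 'G') AS myisam_index_size
FROM information_schema.tables
WHERE engine = 'MyISAM'
  AND table_schema NOT IN ('information_schema', 'mysql');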
mysql - Cannot set autocommit to 0 in the my.ini file / (my.cnf file)
I am using WAMP server, so the my.ini file is the new my.cnf file. I opened the file with an editor and scrolled down until I saw
After that I typed in
Then I saved the file, restarted my WAMP server, started the MySQL console and typed the command
I got @@autocommit | 1
, but as I understand it, if autocommit is off, it should be zero.
Does anyone understand what is going on??
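For what it is worth, MySQL 5.6 and later accept autocommit directly in the config under [mysqld]; on older servers the usual workaround is init_connect, which does not apply to SUPER connections. A sketch of the relevant my.ini section (only one of the two lines would normally be used):

[mysqld]
# MySQL 5.6+: set the session default directly
autocommit = 0
# Older versions: run this for every new non-SUPER connection instead
# init_connect = 'SET autocommit=0'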
mysql - MySQL out of memory on a 24GB server
I have run into an annoying problem.
I have built a system, and now users are telling me it is giving them this message:
Out of memory (Needed 268435427 bytes)
The entire database is 12MB in size, and the query that is having the problem has been running for months and is not that complex or large.
The database is InnoDB. My server has 24GB of memory, so I seriously doubt it is actually out of memory.
my.cnf is as follows:
key_buffer = 8000M
max_allowed_packet = 1M
table_cache = 2048M
sort_buffer_size = 1M
net_buffer_length = 1024M
read_buffer_size = 1M
read_rnd_buffer_size = 24M
innodb_log_file_size = 5M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50
innodb_buffer_pool_size = 1024M
innodb_additional_mem_pool_size = 2M
max_connections = 100
query_cache_size = 128M
query_cache_min_res_unit = 1024
query_cache_limit = 16MB
thread_cache_size = 100
max_heap_table_size = 4096MB
Looking in Windows Task Manager, I see 18.8GB free but only 100MB available. It is Windows 2008 64-bit Server - could that be the source of the problem?
Here is the query:
If I try to set the PHP memory limit above 3.5GB, Apache will not start (I am using XAMPP). Do I have to use the 32-bit version of PHP? It is the same with INNODB_BUFFER_POOL_SIZE, which I would like to be 14GB, but if I set that, MySQL will not start.
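For comparison, a sketch of how these allocations might be rebalanced for an InnoDB-only workload on a 24GB box is below; the figures are assumptions for illustration, not tested values:

[mysqld]
key_buffer_size         = 32M    # key_buffer only serves MyISAM indexes, so keep it small
innodb_buffer_pool_size = 12G    # the main InnoDB cache is what should grow
innodb_log_file_size    = 256M
max_allowed_packet      = 16M
sort_buffer_size        = 2M     # per-connection buffers multiply by max_connections
read_buffer_size        = 1M
read_rnd_buffer_size    = 1M
net_buffer_length       = 16K    # 1024M is far beyond the documented maximum of 1M
table_cache             = 2048   # this is a count of open tables, not a byte size
tmp_table_size          = 256M
max_heap_table_size     = 256M
query_cache_size        = 64M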
mysql - MySQL: editing/saving the my.cnf file
I found the sample configuration files that ship with mysql 5.0, but when I edit them I cannot save them. I also tried saving under a different file name, but Windows gives me an error message saying that I do not have permission to save the file in that location (I am an administrator).
I need to edit the configuration file so that I can save more data. I really do not know how to do this and have been stuck on it for several hours. Can anyone figure out what is going on?
php - Problem editing MySQL's my.cnf with vi
I have a problem: I have a MySQL script that fetches results and lets customers view them via an Excel export. However, while the website shows more results, the spreadsheet is limited to 50 results, and sometimes at most 1500.
I have searched, and it appears I have to edit /etc/my.cnf. I tried editing it and pasting in more settings, but when I exit insert mode, half of the text gets cut off - am I missing something?
This is the entire content of my current /etc/my.cnf - I realize it is the default. I am new to editing with vi, so please bear with me.
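In case it is the same issue that I have seen before: pasting multi-line text into vi with autoindent active mangles it, and toggling paste mode around the paste usually avoids that. Roughly:

:set paste
(switch to insert mode and paste the new settings)
:set nopaste
:wq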
mysql - Enabling my.cnf on CentOS 6 64-bit
Can someone tell me how to enable my.cnf options in CentOS 6?
I have all the expected .cnf files in /usr/share/mysql, but they do not take effect at all.
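If it helps, my understanding is that the files under /usr/share/mysql are only bundled templates; nothing reads them until one is copied to /etc/my.cnf and the server is restarted. Roughly (file name and service name assumed for a default CentOS 6 install):

sudo cp /usr/share/mysql/my-medium.cnf /etc/my.cnf
sudo service mysqld restart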