I don't know how to explain this well, but I will try.
- I use a Google Cloud SQL Second Generation instance with a 20 GB disk.
- I have several WordPress databases with a total size of 166.5 MB.
And right now my storage usage is 9.52 GB (with only 166.5 MB of SQL data) and it keeps growing faster and faster... What should I do?!
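As a quick sanity check (just a sketch of what I ran, nothing specific to Google here), you can ask MySQL itself how big the databases are and compare that with the storage usage the instance reports:

    -- Per-database data + index size in MB, from MySQL's own statistics
    SELECT table_schema,
           ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) AS size_mb
    FROM information_schema.tables
    GROUP BY table_schema;

In my case this adds up to roughly the 166.5 MB above, so the missing gigabytes are overhead on the instance, not table data.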
UPDATE:
I solved this by:
- I made an export to a bucket
- I created a new Cloud SQL instance
- Imported from the bucket
- And deleted the instance with the problem.
(And changed the IP in my applications.)
I don't know for sure where the problem came from, but it could be "storage overhead from binary logs". Next time I will check the binary logs with: mysql> SHOW BINARY LOGS;
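Something like this (a sketch; the column names are as MySQL prints them):

    -- List every binary log file with its size in bytes
    SHOW BINARY LOGS;
    -- Summing the File_size column shows how much storage the logs are eating.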
What I think Google is missing is an easy way to purge binary logs!
FINAL UPDATE:
With binary logs active, the storage of your Cloud SQL instance will keep expanding. For anyone in the same situation: you can edit the instance and uncheck binary logs; after that, the existing binary logs will be purged.
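To double-check afterwards (just how I would verify it, not an official Google procedure):

    -- Should show OFF once binary logs are disabled on the instance
    SHOW VARIABLES LIKE 'log_bin';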
Sorry for my noob problem! :D (I'm a beginner in server administration.)
Thanks Vadim!
1 Answer
#1
If you have binary logs enabled, MySQL will keep a record of all changes, which is required for replication or point-in-time recovery.
If you have no need for these features, you can disable binary logs which will purge any existing logs from your instance.
If binary logs are enabled, they will not grow indefinitely. Binary logs older than the oldest automatic backup (7 days) are purged automatically.
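For comparison, on a self-managed MySQL server this retention is something you would enforce yourself; the statement below is the manual equivalent and is shown only as a sketch, since Cloud SQL may not grant the privileges needed to run it (it handles purging for you):

    -- Manual purge on a self-managed server; Cloud SQL does this automatically
    PURGE BINARY LOGS BEFORE DATE_SUB(NOW(), INTERVAL 7 DAY);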