PriMoThemes — now s2Member® (official notice)
This is now a very OLD forum system. It's in READ-ONLY mode.
All community interaction now occurs at WP Sharks™. See: new forums @ WP Sharks™
I noticed that when you let s2Member re-create both distributions (Download & Streaming) using CloudFront and CNAMEs, something I suppose has to do with AWS, it disables one of them and the other loses its info. I would have to manually disable and delete both distributions, which takes almost 30 minutes; then s2Member is able to re-create both without any errors.
{
    "Version": "2008-10-17",
    "Id": "7756bb18649fb598206XXXXXXe7765b2",
    "Statement": [
        {
            "Sid": "s2Member/CloudFront",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E1IDUXXXXXXX04Y"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::bucket name/*"
        }
    ]
}
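For readers who want to adapt this policy, here is a minimal Python sketch that builds the same JSON structure. The bucket name, policy Id, and Origin Access Identity ID below are placeholders taken from the redacted post, not real values; substitute your own.

```python
import json

# Placeholder values copied from the (redacted) post above -- substitute
# your real bucket name and CloudFront Origin Access Identity ID.
bucket = "bucket-name"
oai_id = "E1IDUXXXXXXX04Y"

policy = {
    "Version": "2008-10-17",
    "Id": "7756bb18649fb598206XXXXXXe7765b2",
    "Statement": [
        {
            "Sid": "s2Member/CloudFront",
            "Effect": "Allow",
            "Principal": {
                "AWS": (
                    "arn:aws:iam::cloudfront:user/"
                    "CloudFront Origin Access Identity " + oai_id
                ),
            },
            # Grant the CloudFront OAI read access to every object in the bucket.
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::" + bucket + "/*",
        }
    ],
}

print(json.dumps(policy, indent=2))
```

This JSON is what gets written to the bucket (the debug dump later in this thread shows it being PUT to the bucket's `/?policy` endpoint).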
drbyte wrote: Hi Jason,
I did not get a notification about the update to this thread. Sorry!
OK, I am still getting this error on a new installation:
Error code: 400. Error Message: Unable to update existing Amazon® S3 ACLs. Unable to update existing Amazon® S3 Bucket Policy. Bad Request
I checked every single field, including the private key.
It keeps giving me the error above. It takes about 15 minutes with the Amazon console, but I'm guessing with s2Member it's about 30 minutes: 15 minutes for each distribution.
It creates the distributions (Download and Streaming) correctly, but it's unable to create the bucket policy.
I have this on one that is working on another site:
{
    "Version": "2008-10-17",
    "Id": "7756bb18649fb598206XXXXXXe7765b2",
    "Statement": [
        {
            "Sid": "s2Member/CloudFront",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E1IDUXXXXXXX04Y"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::bucket name/*"
        }
    ]
}
I tried creating a new policy for the new bucket by copying the above and changing the CloudFront Origin Access Identity and the bucket name. No luck. Can I use the same identity and the same Id as above, or do those need to be different, Jason?
I'm not sure what the problem is, but I'm guessing that if I delete and try to re-create the distributions more than 3 times, I have to wait about two and a half hours before trying again; maybe more.
Thank You
Sam
<?php if (current_user_can("access_s2member_level1")) { /* Members who CAN access Level #1 on this Blog. */ ?>
Content for Members who CAN access Level #1 on this Blog.
<?php } elseif (current_user_can_for_blog($blog_id, "access_s2member_level1")) { /* $blog_id = the ID of the other Blog to check. */ ?>
Your Stuff in here.
<?php } elseif (!is_user_logged_in()) { /* There is no is_user_not_logged_in() in WordPress; negate is_user_logged_in() instead. */ ?>
Some public content.
<?php } ?>
array (
'option_value' => '',
'option' => 'pro_recaptcha_private_key',
's3c' =>
array (
'bucket' => 'xxxxxxxxxxxxx',
'access_key' => 'xxxxxxxxxxxxxx',
'secret_key' => 'xxxxxxxxxxxxxxxxxxx',
),
'cfc' =>
array (
'distros_s3_access_id' => '455e7e83exxxxxxxxxxxxxx5cc617baf7ddc08xxxxxxxxxxxxxxxxxxxxxxxxxxx9876xxxxxxxxf6802f0',
),
's3_date' => 'Wed, 07 Dec 2011 09:17:47 GMT',
's3_location' => '/?policy',
's3_domain' => 'xxxxxxxxxxxxxxxxx.s3.amazonaws.com',
's3_signature' => 'gv+cxxxxxvtG5Rxxxxxxxxxxxxxxxxxx=',
's3_args' =>
array (
'method' => 'PUT',
'body' => '{"Version":"2008-10-17","Id":"xxxxxxxbb1xxxxxxxxxx65b2","Statement":[{"Sid":"s2Member/CloudFront","Effect":"Allow","Principal":{"CanonicalUser":"455e7exxxxxxxxxxx17baf7xxxxxxxx7ed841c724a861c129xxxxxxxxxxx4f6xxxxxxxxxxx2f0"},"Action":"s3:GetObject","Resource":"arn:aws:s3:::xxxxxxxx/*"}]}',
'headers' =>
array (
'Host' => 'xxxxxxxx.s3.amazonaws.com',
'Content-Type' => 'application/json',
'Date' => 'Wed, 07 Dec 2011 09:17:47 GMT',
'Authorization' => 'AWS AxxxxxxxxxxQQ:gvxxxxxxxxxFpCSwxxxxxxxxxx90i8s=',
),
),
's3_response' =>
array (
'code' => 200,
'message' => 'OK',
'headers' =>
array (
'x-amz-id-2' => 'sn2wClZ0xxxxxxxxxxxxxxCLxxxxxxxx99xxxxxxxxxxxxxxU6iXg',
'x-amz-request-id' => 'FA8xxxxxxx56676xxxxxxxDAC',
'date' => 'Wed, 07 Dec 2011 09:17:50 GMT',
'content-length' => '0',
'connection' => 'keep-alive',
'server' => 'AmazonS3',
),
'body' => '',
'response' =>
array (
'headers' =>
array (
'x-amz-id-2' => 'sn2wClZ06ExxxxxxxxxxfHvMWa5xxxxxxxxxxLSK29xxxxxxxxxxU6iXg',
'x-amz-request-id' => 'Fxxxxxxxx56676xxxxxxxxC',
'date' => 'Wed, 07 Dec 2011 09:17:50 GMT',
'content-length' => '0',
'connection' => 'keep-alive',
'server' => 'AmazonS3',
),
'body' => '',
'response' =>
array (
'code' => 200,
'message' => 'OK',
),
'cookies' =>
array (
),
'filename' => NULL,
),
),
's3_owner_tag' =>
array (
0 => '<Owner><ID>80d89cf4xxxxxxxxxxxxxxe718xxx2e3exxxxxxxxxxxxb145c200</ID><DisplayName>xxxxxxxxx</DisplayName></Owner>',
1 => '<ID>80d8xxxxxx7a748xxxxxxxxxx7xxxx3e2ecxxxxxxxxxxxx145c200</ID><DisplayName>xxxxxxxx</DisplayName>',
),
's3_owner_id_tag' =>
array (
0 => '<ID>80d8xxxx4790c57a7xxxxxxxxxxxc7296xxxxxxxxxxx5c200</ID>',
1 => '80dxxx4790c5xxxxxxxxxxx7xxxxx2e3e2xxxxxxxxxa2b145c200',
),
's3_owner_display_name_tag' =>
array (
0 => '<DisplayName>xxxxxxxxxx</DisplayName>',
1 => 'xxxxxxxxx',
),
's3_owner' =>
array (
'access_id' => '80xxxxxxxxcf4790c5xxxxxxxxxxxxxxxxe2ec7xxxxxxxxxxx145c200',
'display_name' => 'xxxxxxxxxxx',
),
's3_acls_xml' => '<AccessControlPolicy><Owner><ID>80d89cf4xxxxxxxxxxxxxxxx71xxx12exxxxxxxxxx6a2b145c200</ID><DisplayName>xxxxxxx</DisplayName></Owner><AccessControlList><Grant><Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser"><ID>80d8xxxxxxxxxxxxxe718712e3e2xxxxxxxxxxxx45cxxxx00</ID><DisplayName>xxxxxxxxx</DisplayName></Grantee><Permission>FULL_CONTROL</Permission></Grant><Grant><Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser"><ID>455exxxxxxxxxxx7e46bcf4667axxxxxxxxxxxxxxxxee22177dexxxxxxxxxxxxxxxf6802f0</ID><DisplayName>s2Member/CloudFront</DisplayName></Grantee><Permission>READ</Permission></Grant></AccessControlList></AccessControlPolicy>',
's3_policy_json' => '{"Version":"2008-10-17","Id":"7xxxxxxx56bxxxxxxxxxxxxxxxxxx6xxxxxx2","Statement":[{"Sid":"s2Member/CloudFront","Effect":"Allow","Principal":{"CanonicalUser":"455xxxxxxxxx667a5c36c617baf7xxxxxxxxxxxxxxx129ee2xxxxxxx2cxxxxxxxx02f0"},"Action":"s3:GetObject","Resource":"arn:aws:s3:::xxxxxxxxxxxxxx/*"}]}',
)
<div id="jw-container"></div>
<script type="text/javascript" src="/jwplayer/jwplayer.js"></script>
<?php
$cfg = array ("file_download" => get_post_meta(get_the_ID(), "movie", true), "url_to_storage_source" => true, "count_against_user" => true); ?>
<?php if (($mp4 = s2member_file_download_url ($cfg, "get-streamer-array"))) { ?>
<script type="text/javascript">
jwplayer("jw-container").setup({modes: /* JW Player. */
[
/* First try real-time streaming with Flash player. */
{type: "flash", provider: "rtmp", src: "/jwplayer/player.swf",
config: {streamer: "<?php echo $mp4["streamer"]; ?>", file: "<?php echo $mp4["file"]; ?>"}},
/* Else, try an HTML5 video tag. */
{type: "html5", provider: "video",
config: {file: "<?php echo $mp4["url"]; ?>"}},
],
autostart: true,
controlbar: "bottom",
skin: "http://www.site.com/glow.zip",
/* Set video dimensions. */ width:480, height: 320
});
</script>
<OperationUsage>
<ServiceName>AmazonS3</ServiceName>
<OperationName>GetObject</OperationName>
<UsageType>DataTransfer-Out-Bytes</UsageType>
<Resource>xxxxxxxxxx</Resource>
<StartTime>12/04/11 04:00:00</StartTime>
<EndTime>12/04/11 05:00:00</EndTime>
<UsageValue>19750475516</UsageValue>
</OperationUsage>
That's 19,750,475,516 bytes, i.e. about 18.39 GB of transfer.
<OperationUsage>
<ServiceName>AmazonS3</ServiceName>
<OperationName>GetObject</OperationName>
<UsageType>DataTransfer-Out-Bytes</UsageType>
<Resource>xxxxxxxxx</Resource>
<StartTime>12/03/11 13:00:00</StartTime>
<EndTime>12/03/11 14:00:00</EndTime>
<UsageValue>57150542691</UsageValue>
</OperationUsage>
That's 57,150,542,691 bytes, i.e. about 53 GB of transfer... impossible!
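As a sanity check on those two figures: `<UsageValue>` in the reports is a byte count, so a quick conversion (assuming binary gigabytes, 1 GB = 1024³ bytes, which matches the 18.394… number quoted above) confirms the totals:

```python
# <UsageValue> in the AWS usage reports above is a byte count.
# Convert to binary gigabytes (1 GB = 1024**3 bytes).
def bytes_to_gb(n_bytes):
    return n_bytes / 1024 ** 3

first = bytes_to_gb(19750475516)   # 12/04/11 report
second = bytes_to_gb(57150542691)  # 12/03/11 report
print(round(first, 2))   # ~18.39 GB
print(round(second, 2))  # ~53.23 GB
```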
I believe all sub-sites are copying the main site's S3 and CloudFront credentials. Meaning it's not able to re-create the bucket policy, because one is already present, and it's not able to change it, because it belongs to the parent site. I think this problem only exists if the multisite network is configured with sub-directories rather than sub-domains; i.e. http://www.site.com/subsite vs. http://www.sub.site.com.
<?php
add_filter ("ws_plugin__s2member_options_before_checksum", "s2_site_options"); function s2_site_options (&$options = array ())
{
if (is_multisite () && is_array ($site_options = get_site_option ("ws_plugin__s2member_options")))
foreach ($site_options as /* Use global Amazon® config. */ $key => $value)
if (preg_match ("/^amazon_(?:s3|cf)_files_/", $key))
$options[$key] = $value;
/**/
return /* Options by reference. */ $options;
}
?>
Note: with this file in place, there is no need to configure Amazon S3/CloudFront on any of your other Child Blogs in the same Network. All existing and/or future Child Blogs will essentially come pre-configured with your current configuration on the Main Site, with respect to Amazon S3/CloudFront. Some might see this as a great time-saver. WARNING: checking the box in the s2Member UI panel to re-configure your Amazon/CloudFront Distributions on any other Child Blog in the Network (or on any other remote installation of WordPress, for that matter) will effectively destroy what you've accomplished. Don't do it. Auto-configure your Amazon S3/CloudFront Distributions on the Main Site of your Network only. All other Child Blogs in the Network will use that configuration, and should NOT be re-configured again. If you do this by accident, go back to your Main Site and re-run s2Member's auto-configuration routines all over again. Child Blogs will inherit their configuration from the Main Site.
I'm not sure, but it sounds like something is preloading somewhere. You might check with JWPlayer to see if there are any known bugs in this regard. Otherwise, you said that you were moving files around? Is it possible that there are redirects involved somehow, causing files to be downloaded inadvertently?

From the 1st to the 4th, I got about 600 GB of AWS data transfer out. All I was doing was switching my files from Wowza to AWS. I was viewing the post a few seconds at a time, to check whether the movie was playing correctly, but never more than 5 seconds at a time.
But after taking this out from the code above:
/* Else, try an HTML5 video tag. */
{type: "html5", provider: "video",
config: {file: "<?php echo $mp4["url"]; ?>"}},
Until we have this issue resolved, here are some possible solutions:
1. Use only ONE Bucket for each instance of s2Member ( problem solved ).
Jason Caldwell wrote: Update: this patch has been revised on the advice of two beta testers.
If you downloaded the previous patch file and still had trouble, please update to this latest patch.