I have an NSInputStream and an NSOutputStream connected to each other over the network. I want to transfer Core Data objects and their associated images from one device to another. I have already managed to convert the Core Data objects to JSON, transfer them to the other end of the stream, and recreate the objects from the JSON there. Each record also has images associated with it; the images are on disk, and only their paths are stored in the Core Data objects. For writing to the output stream I have the XML (containing the JSON) ready.
1. How do I transfer the images (NSData *) along with the XML (also NSData *)? On the reading side (NSInputStream), how would I tell the XML apart from the images?
2. Also, I want to transfer several images. How can the NSInputStream side tell where the bytes of one image end and the bytes of the next one begin?
3. How do we know which image (by name) was transferred?
Thanks
Convert the NSData (for each UIImage) into an NSString representation, then put all of the NSString objects into an NSDictionary and serialize that dictionary. When you transfer the data you can then reverse the process to extract the images, and the keys tell you which entry corresponds to which image. That way you should be able to transfer multiple images as well.
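A minimal sketch of that approach, assuming base64 for the NSString representation and NSJSONSerialization for serializing the dictionary (the answer names neither), with the file name used as the key so the receiver knows which image is which; imagePaths is a hypothetical array of the paths stored in the records:

NSMutableDictionary *payload = [NSMutableDictionary dictionary];
for (NSString *imagePath in imagePaths) {
    NSData *imageData = [NSData dataWithContentsOfFile:imagePath];
    // Base64 gives a plain NSString that survives JSON serialization
    payload[imagePath.lastPathComponent] = [imageData base64EncodedStringWithOptions:0];
}
NSData *serialized = [NSJSONSerialization dataWithJSONObject:payload options:0 error:NULL];
// ... write `serialized` to the NSOutputStream ...

// Receiving side: reverse the process
NSDictionary *received = [NSJSONSerialization JSONObjectWithData:serialized options:0 error:NULL];
[received enumerateKeysAndObjectsUsingBlock:^(NSString *name, NSString *encoded, BOOL *stop) {
    NSData *imageData = [[NSData alloc] initWithBase64EncodedString:encoded options:0];
    UIImage *image = [UIImage imageWithData:imageData];
    // store or display `image` as needed; `name` is the original file name
}];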
Hope this helps.
Cheers
Answered 2013-02-06T12:27:58.807
I solved this with the following steps:
1. Convert each managed object into an NSDictionary
2. Put all of the dictionaries into an NSArray
3. Convert the NSArray into NSData using NSKeyedArchiver
4. Transfer the NSData over the stream
On the receiving side I reversed the above steps, as sketched below.
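A rough sketch of those steps, assuming the snippet runs where the fetched managed objects (objects), the open output stream (outputStream), and the fully received bytes (received) are available; those names are illustrative and not from the original answer:

// 1. Convert each managed object into a plain dictionary of its attribute values
NSMutableArray *dictionaries = [NSMutableArray array];
for (NSManagedObject *object in objects) {
    NSDictionary *dict = [object dictionaryWithValuesForKeys:object.entity.attributesByName.allKeys];
    [dictionaries addObject:dict];   // 2. collect the dictionaries in an NSArray
}
// 3. Archive the array into NSData
NSData *archived = [NSKeyedArchiver archivedDataWithRootObject:dictionaries];
// 4. Write the data to the stream (a real implementation would loop until every byte is written)
[outputStream write:(const uint8_t *)archived.bytes maxLength:archived.length];

// Receiving side, once all bytes have been collected into `received`: reverse the steps
NSArray *restored = [NSKeyedUnarchiver unarchiveObjectWithData:received];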
Thanks Marius Kurgonas
Answered 2013-02-08T07:07:11.620
Every answer here sounds ridiculous. Try something like this:
case NSStreamEventHasBytesAvailable: {
    NSLog(@"NSStreamEventHasBytesAvailable");
    // mbuf is a plain byte buffer (the original declared an array of pointers by mistake);
    // DATA_LENGTH, mlen and mdata are defined elsewhere in the class
    uint8_t mbuf[DATA_LENGTH];
    mlen = [(NSInputStream *)stream read:mbuf maxLength:DATA_LENGTH];
    NSLog(@"mlen == %lu", (unsigned long)mlen);
    [mdata appendBytes:(const void *)mbuf length:mlen];
    NSLog(@"mdata length == %lu", (unsigned long)mdata.length);
    // A read shorter than DATA_LENGTH is treated as the end of the current image
    if (mlen < DATA_LENGTH) {
        NSLog(@"displayImage");
        UIImage *image = [UIImage imageWithData:mdata];
        [self.peerConnectionViewController.view.subviews[0].layer setContents:(__bridge id)image.CGImage];
        // Reset the buffer for the next image
        mdata = [[NSMutableData alloc] init];
        mlen = DATA_LENGTH;
    }
} break;
...
- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    // Turn the captured frame into a UIImage
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace,
                                                    kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);
    UIImage *image = [[UIImage alloc] initWithCGImage:newImage scale:1 orientation:UIImageOrientationUp];
    CGImageRelease(newImage);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // Compress the frame and write it to the stream in DATA_LENGTH-sized chunks
    NSData *data = [NSData dataWithData:UIImageJPEGRepresentation(image, 0.25)];

    __block BOOL baseCaseCondition = NO; // obviously this should be data driven, not hardcoded
    __block NSInteger _len = DATA_LENGTH;
    __block NSInteger _byteIndex = 0;

    typedef void (^RecursiveBlock)(void (^)());
    RecursiveBlock aRecursiveBlock;

    aRecursiveBlock = ^(RecursiveBlock block) {
        NSLog(@"Block called...");
        baseCaseCondition = (data.length > 0 && _byteIndex < data.length) ? TRUE : FALSE;
        if (baseCaseCondition && block)
        {
            // Write the next chunk (at most DATA_LENGTH bytes)
            _len = (data.length - _byteIndex) == 0 ? 1 : (data.length - _byteIndex) < DATA_LENGTH ? (data.length - _byteIndex) : DATA_LENGTH;

            NSLog(@"START | byteIndex: %lu/%lu writing len: %lu", (unsigned long)_byteIndex, (unsigned long)data.length, (unsigned long)_len);

            // A plain byte buffer (the original declared an array of pointers by mistake)
            uint8_t bytes[_len];
            [data getBytes:bytes range:NSMakeRange(_byteIndex, _len)];
            _byteIndex += [self.outputStream write:(const uint8_t *)bytes maxLength:_len];

            NSLog(@"END | byteIndex: %lu/%lu wrote len: %lu", (unsigned long)_byteIndex, (unsigned long)data.length, (unsigned long)_len);

            // Schedule the next chunk on the main queue
            dispatch_barrier_async(dispatch_get_main_queue(), ^{
                block(block);
            });
        }
    };

    if (self.outputStream.hasSpaceAvailable)
        aRecursiveBlock(aRecursiveBlock);
}
Answered 2017-09-21T23:15:05.070